Geometry Nodes

It's a dynamic paint set-up using the volume of a generated FOV cone to subtract weight. (That's why I suggested a fourth mode for the new attribute proximity node, the volume option! Quite important.)
So no, it’s not using the nodes.

Maybe we can clip particles outside the camera FOV by comparing the point positions with the camera orientation & FOV angle? :thinking:

Oh, you’re using a mesh overlaid with the camera, I didn’t catch that. Clever !

I'm too weak at maths to figure this out quickly, but I imagine you'd convert the points to camera space by multiplying them with the camera's transform matrix. However, I'm not sure this would give you enough information…

Well, I'm almost sure it's possible using simple Pythagoras. I could do it with Python, but translating a formula into math nodes is more complicated, of course.

anyway i’m just throwing the idea here :slight_smile:
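To make the idea concrete, here's a minimal Python sketch of the cone test described above. It is a hypothetical helper, not an existing Blender node: it assumes the camera position, forward direction, and FOV angle are already known in world space, and simply checks whether a point falls inside the viewing cone.

```python
import math

def in_camera_fov(point, cam_pos, cam_forward, fov_angle):
    """Return True if `point` lies inside a cone of half-angle
    fov_angle/2 opening from cam_pos along cam_forward.
    (Hypothetical helper, not an existing Blender node.)"""
    # Vector from the camera to the point
    to_point = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_point))
    if dist == 0.0:
        return True  # point coincides with the camera
    # Cosine of the angle between the view direction and the point
    cos_angle = sum(a * b for a, b in zip(to_point, cam_forward)) / dist
    return cos_angle >= math.cos(fov_angle / 2.0)

# Camera at the origin looking down -Y with a 60-degree FOV
print(in_camera_fov((0, -5, 0), (0, 0, 0), (0, -1, 0), math.radians(60)))  # True (in front)
print(in_camera_fov((5, 0, 0), (0, 0, 0), (0, -1, 0), math.radians(60)))   # False (to the side)
```

In a real node tree the same comparison could be built from a Dot Product and a Math (Cosine) node; a rectangular frustum test would need two separate half-angle checks for the horizontal and vertical FOV.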

Hello

There are numerous properties that cannot be used as inputs?

Is there a plan to have them as custom inputs? I'm not sure why this limitation exists.

This is causing problems, forcing the creation of a unique node group just to change one single property.

I am of the opinion that implementing instance culling and LOD selection should be handled internally by Blender rather than by the user's own node logic. If your issue is the load that many polys/instances put on the rasterizer/vertex shader or ray traversal, then the renderer should figure out how to balance the workload without impacting the image too much. The artist should not have to invent a system like this and should only focus on how to compose their shot.
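For reference, the kind of logic the renderer would run internally is usually quite simple: pick a mesh level from the instance's distance to the camera. A minimal sketch, with entirely hypothetical distance thresholds:

```python
def pick_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Pick an LOD index from the camera distance.
    0 = full detail; each higher index means a coarser mesh.
    Thresholds are hypothetical and would be renderer-tuned."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # beyond all thresholds: coarsest level

print([pick_lod(d) for d in (5, 30, 100, 500)])  # [0, 1, 2, 3]
```

A production renderer would select on projected screen-space size rather than raw distance, but the point stands: this is bookkeeping the engine can do automatically, without artist-built node logic.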


It's not just a real-time optimization… LODs are used even in film VFX, though not in quite the same systemic way as in a game engine like UE4. The reality is that rendering takes a lot of time, and the more geometry and complex materials you have in your scene, the longer it takes to render. So if you reduce those things, your renders get faster.

now, we could discuss whether or not LODs are something we need to be thinking about for Blender right now (I personally think it’s putting the cart before the horse when we lack so much other basic functionality), but their benefit to any 3D industry is undeniable.

I've seen countless demonstrations of Blender rendering billions of complex tree instances with extreme ease, so I'm not sure I get your argument here :thinking:

Maybe I'm thinking about particles & scattering too much here? But then, that's quite the goal of this thread.
( related : Offloading heavy geometry - #72 by BD3D)

@BD3D does it have to be discussed in this thread? I'm pretty sure the devs are aware of the current performance issues, and poor performance with lots of instances is not a problem of the Geometry Nodes module alone.

@Kenzie

The artist should not have to invent a system like this and should only focus on how to compose their shot.

I really second that!


Yeah, that could result in an interesting camera clipping node, couldn't it?

Maybe we can clip particles outside the camera FOV by comparing the point positions with the camera orientation & FOV angle? :thinking:

Maybe it can already be done, though.

Ease of rendering is a rather subjective feeling.
The feeling will definitely not be the same if you have a render farm at your disposal or just a laptop.

Whatever the system used (nodes, modifiers, driven library overrides…), a simple way to do it will be welcome.

To make working on large projects easier, GN needs some kind of "switch" node that can select different inputs for the OpenGL viewport, interactive rendering, and the final render. This way it would be possible to set a lower point-distribution density in the viewport, pick a simplified mesh, set a lower subdivision level, etc.

It would also be very convenient if instanced objects could be displayed as point clouds. I thought this could maybe be achieved by adding a nodes modifier on the source object with only a Point Distribute node inside, but this method has too many drawbacks. The main one is that modifiers cannot be disabled for interactive rendering, only for the final render, so it would be useless during scene creation. Also, the points themselves have weird shading and a fixed size in scene space, whereas points in a point cloud should have a fixed size in screen space to better display the object's shape when zooming out. The best example in Blender is to delete all faces and edges in a mesh, along with a percentage of its vertices; the remaining vertices basically look like a point cloud.

Subscribe here then https://developer.blender.org/T82876


What about a phyllotaxis distribution for making flowers and plants?


Blender does not support enum properties for sockets, but it would indeed be useful to have. I don't know; it will probably be fixed at some point.


Agree with the viewport / render-time result. Wouldn't that be as easy as creating two sockets in the output, one for render and one for viewport? After all, such a differentiation already exists in the modifier.

I reckon this could lead to a lot of duplication in the node tree, whereas if it's exposed as a variable, as in Pablo's mockup, you can keep a single node tree and just differentiate through that variable.

Yes, that could be a good solution too :slight_smile:

Nice artwork. How would such a phyllotaxis distribution work? Are there any public/open papers that the community could pick up?


http://www.evsc.net/projects/phyllotaxis-sphere

http://algorithmicbotany.org/papers/abop/abop-ch4.pdf
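The core of the technique in those links can be sketched in a few lines: Vogel's model places point k at angle k times the golden angle (about 137.5°) and at a radius proportional to sqrt(k), which yields the familiar sunflower packing. A minimal Python sketch (the function name and scale parameter are my own, just for illustration):

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~137.5 degrees in radians

def vogel_spiral(n, scale=1.0):
    """Generate n 2D points following Vogel's phyllotaxis model:
    point k sits at angle k * golden angle, radius scale * sqrt(k)."""
    points = []
    for k in range(n):
        r = scale * math.sqrt(k)
        theta = k * GOLDEN_ANGLE
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = vogel_spiral(256)
print(len(pts))  # 256 points; the first one sits at the origin
```

In Geometry Nodes the same pattern could presumably be driven by an index attribute fed through Math nodes (sqrt for the radius, multiply for the angle) into a point position.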


Is there any way to transform geometry with vertex groups?