Is this just conjecture or have you profiled the performance? It would be good to have real data here to guide the conversation.
I do wonder if it makes sense to have a node specifically for camera culling, either to generate an attribute or for removing distributed points. It’s probably a very common need.
Is this just conjecture or have you profiled the performance?
Well, I just did some quick observations with Fraps:
Interacting with the viewport (normal): 60+ FPS
Displaying a lot* of particle instances: ≈1 FPS (completely unusable)
Using the camera clipping node while moving the camera (displaying ≈10% of the particles): ≈8 FPS (way better)
Using the camera clipping node while moving the camera, looking away from the terrain (no particles displayed): ≈18 FPS
Conclusion:
The large number of math nodes slows down the viewport when the camera moves, since the calculations need to be done for a lot of entities.
*This completely depends on the particle density, of course; unfortunately there's no way to know the exact particle count.
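For reference, the core of a camera-clipping setup like the one benchmarked above is just an angle test per point. This is a plain-Python sketch of that test (not the actual node group; the function name, cone approximation of the FOV, and the assumption that the view direction is normalized are all mine):

```python
import math

def in_camera_fov(cam_pos, cam_dir, point, fov_deg):
    """Return True if `point` lies inside a cone approximating the camera FOV.

    cam_dir is assumed to be a normalized view direction.
    """
    # Vector from the camera to the point.
    to_point = [p - c for p, c in zip(point, cam_pos)]
    length = math.sqrt(sum(v * v for v in to_point))
    if length == 0.0:
        return True  # Point exactly at the camera: keep it.
    # Cosine of the angle between the view direction and the point direction.
    cos_angle = sum(d * v for d, v in zip(cam_dir, to_point)) / length
    # Inside the cone if that angle is smaller than half the FOV.
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

# Camera at the origin looking down +Y, with a 60° FOV:
print(in_camera_fov((0, 0, 0), (0, 1, 0), (0, 10, 0), 60))   # straight ahead -> True
print(in_camera_fov((0, 0, 0), (0, 1, 0), (10, 0, 0), 60))   # 90° off axis -> False
```

In a node tree this would feed an attribute that a point-separation/delete step uses as a mask, which is why the FPS improves: culled points never reach instancing.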
Completely agreed.
IMO there should be a sprint dedicated to performance and working with a lot of points;
so far the demo scenes aren't that compute-intensive.
Having to scatter a large scene would be an interesting experiment,
and yeah, camera frustum culling is a must.
But it might be best to implement it in the OpenGL instancing code instead of a node? It depends on whether it's only seen as a viewport optimization, or the culling also needs to happen at render time.
Is there a way to get Normal data like the Geometry node in the shader editor? Looking for a normal based density attribute for slopes on terrains and such.
Thanks for the reply Miro.
Is the normal attribute implemented in 2.92.0? While this makes sense, it doesn't seem to have any impact regardless of the B float value.
Thanks. We have been keeping performance in mind with each feature, even if that’s not the only thing we’re working on. The big issue with instances in the viewport is that the final instance objects are generated for every redraw based on the evaluated state of the initial objects. Using any node that writes data to make the instances real in the node tree might help viewport performance (at the cost of memory usage I assume).
In that screenshot you’re comparing a single float number with the normal, which is a vector. You might want to compare the normal with a vector instead.
I tried that, unfortunately it’s still the same thing.
The way I use it for shaders is to split the normal vector and run the Z axis through a color ramp to clamp the values.
With Geometry Nodes, any value below 1 spawns 0 instances, while anything above 1 makes the maximum amount of duplicates, regardless of whether I use a float or a vector.
EDIT: Oops, just missed that there’s a threshold value available in the vector rollout.
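The "split the normal, ramp the Z axis" trick described above boils down to a remap-and-clamp. Here's a plain-Python sketch of that mapping (the function name and the default thresholds 0.7/1.0 are illustrative, not from any actual node):

```python
def slope_density(normal_z, min_z=0.7, max_z=1.0):
    """Map the Z component of a surface normal to a 0..1 density weight,
    like running it through a color ramp: flat ground (z near 1) -> 1,
    steep slopes (z below min_z) -> 0, linear in between."""
    t = (normal_z - min_z) / (max_z - min_z)
    return max(0.0, min(1.0, t))

print(slope_density(1.0))   # flat ground -> 1.0
print(slope_density(0.5))   # steep slope -> 0.0
print(slope_density(0.85))  # in between (about 0.5 with the default ramp)
```

The threshold value in the vector rollout mentioned in the EDIT plays the same role as `min_z` here: it decides where the density cutoff sits.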
Sorry, I haven’t read the entire thread, but have temporal tools been discussed?
Like referencing other frames, frame holds, etc., so we can compute things like deformation variations across time, perform temporal stabilization on meshes (assuming the same point count), and so on?
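One concrete example of the kind of temporal stabilization asked about above is an exponential moving average over per-point values across frames. This is a plain-Python sketch of that idea (the function name and the choice of EMA as the filter are mine; it assumes the same point count every frame, as stated):

```python
def smooth_positions(frames, alpha=0.5):
    """Temporally smooth per-point values across frames via an
    exponential moving average. `frames` is a list of frames, each a
    list of per-point values (same length every frame). Lower alpha
    means more smoothing; alpha=1 passes the input through unchanged."""
    prev = list(frames[0])
    smoothed = [list(prev)]
    for frame in frames[1:]:
        # Blend the current frame toward the previous smoothed result.
        prev = [alpha * cur + (1 - alpha) * old for cur, old in zip(frame, prev)]
        smoothed.append(list(prev))
    return smoothed

# A single point whose value jitters 0 -> 1 -> 0 gets damped:
print(smooth_positions([[0.0], [1.0], [0.0]]))  # -> [[0.0], [0.5], [0.25]]
```

Doing this inside the node tree would require exactly what the post asks for: access to the previous frame's evaluated state.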
I also have a question whose answer I couldn’t find searching this thread. Is it possible to pass a value (as an attribute) from Geometry Nodes to Shader Nodes? I tried creating a new attribute in GeoNodes and reading it in the shader’s Attribute node, but it’s not working. I’ve seen passing values through vertex colors mentioned here, which I could work with, but I can’t find a way to set vertex colors through GeoNodes. Any help welcome, thanks!
Maybe this is what you are looking for. There is a commit that makes it possible to pass arbitrary attributes from geonodes to Cycles. I’m not sure if it has made it into master yet.
I haven’t done extensive performance testing, and the testing I have done was with dummy scenes.
My findings with performance were that viewport display and Eevee rendering benefitted most from camera FOV clipping.
Cycles had very minimal performance difference - though memory usage was reduced.
(I plan to do some more serious testing, and publish a scene and the numbers.)
I guess I agree with others that in an ideal world that the render engine should be able to efficiently cull objects that don’t affect the render result. In reality though we’re often making many optimisations ourselves to reduce render times. BTW if my node group couldn’t improve performance because Blender optimised the scene already - then that would be just perfect.
I see that the Blender Cloud team are using techniques like having a “viewport density” setting to improve viewport performance.
If you’re using Geometry Nodes object scattering very heavily, then this is just another tool that you can try.
Hey, I tried to make a matrix rotation node group, but I’m running into weird issues; can someone take a look?
Maybe it’s because I lack knowledge in math, but the results seem wrong.
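I can't see the file, but "weird" matrix-rotation results are very often a convention mix-up (row vs. column vectors, or degrees vs. radians). For reference, here is a plain-Python sketch of a Z-axis rotation applied to a point, which is what a rotation node group ultimately computes (function names are mine):

```python
import math

def rotation_z(angle_deg):
    """3x3 rotation matrix around the Z axis (angle in degrees)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply(matrix, vec):
    """Matrix-times-column-vector multiply: rotate `vec` by `matrix`."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

# Rotating the X axis by 90° around Z should give the Y axis
# (up to floating-point noise):
print(apply(rotation_z(90.0), [1.0, 0.0, 0.0]))
```

If a node group reproduces these dot products with math nodes, a transposed matrix (swapping the signs of the two `s` terms) rotates in the opposite direction, which is a common source of "weird" results.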
Stupid question from someone who has only just started looking at geometry nodes.
Are the attribute names the user can type into the attribute node fields (location, rotation, scale) actually documented anywhere? How many exist, and is there a description of how the user can find out what an attribute is called so that it can be accessed?
Or are there only these three?
The attribute documentation page lists position and radius, for example, and this has me a little confused, TBH.
What can be entered into those fields, and how is the user supposed to find out?