Geometry Nodes

You mean like the old use count feature suggested here?
I’d also want such a feature.

Note that it’s possible with a Point Separate node, but it’s not handy.

Yes, precisely this!

Yeah, for sure! If my only goal had been the final image, I would have employed more of Blender here, but this was primarily a stress test and an excuse to push myself to see which workflows can benefit from Geometry Nodes :grin:

Yes, and these lists (technically arrays) need to satisfy certain requirements in order to remain valid for the geometry. This is taken care of automatically, hence the dependence on topology. This is pretty much set in stone.

I’m confused as to how this isn’t already solved. There is already a way to do this against an attractor’s vector using the attribute math nodes. The attribute processor will do basically what you are asking: it breaks multiple attributes down into values, lets you do math/logic on them in the most efficient, parallel way possible, and pipes the results out into return attributes. It’s so similar to a compute shader, in fact, that it could possibly be (somewhat) trivially offloaded to the GPU in the future. That is why it has been given a special type of node group.
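To make the data-parallel idea concrete, here is a rough sketch in NumPy (purely illustrative, not Blender’s API; the function and parameter names are made up) of attribute math as whole-array operations, which is exactly the shape of work a compute shader handles well, one “thread” per element:

```python
import numpy as np

# Hypothetical illustration, not Blender's attribute processor:
# every operation acts on the whole attribute array at once.
def attractor_pull(positions, attractor, strength):
    """Pull every point toward an attractor location in one pass."""
    offsets = attractor - positions          # (N, 3) vector per point
    dist = np.linalg.norm(offsets, axis=1)   # (N,) distance per point
    falloff = strength / (1.0 + dist)        # simple inverse falloff
    return positions + offsets * falloff[:, None]

points = np.zeros((4, 3))                    # four points at the origin
moved = attractor_pull(points, np.array([1.0, 0.0, 0.0]), 0.5)
```

Each line maps trivially to a per-element GPU kernel, which is why offloading this kind of node group is plausible.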

If you are talking about the attractor being a set of vertices/points, that’s an issue that needs more than just a simple set of arrays, and it has been solved with an optimized acceleration structure in the Attribute Proximity node.
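For anyone curious what an acceleration structure buys you here (Blender reportedly uses a BVH internally; this toy uniform grid in plain Python is just to show the idea): each query scans only nearby cells instead of every point.

```python
from collections import defaultdict

# Toy sketch, not Blender internals: bucket points into a uniform grid
# so a proximity query touches a handful of cells, not all N points.
def build_grid(points, cell):
    grid = defaultdict(list)
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell), int(p[2] // cell))
        grid[key].append(p)
    return grid

def nearest_distance(grid, cell, q):
    # Only correct when the nearest point lies in an adjacent cell;
    # a real structure (BVH, kd-tree) handles the general case.
    best = float("inf")
    qk = (int(q[0] // cell), int(q[1] // cell), int(q[2] // cell))
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for p in grid.get((qk[0] + dx, qk[1] + dy, qk[2] + dz), []):
                    d = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                    best = min(best, d)
    return best

grid = build_grid([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], 1.0)
d = nearest_distance(grid, 1.0, (0.4, 0.0, 0.0))
```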

It seems the only thing to do is wait patiently and see how the processor works in practice.

I hope there will be a sprint dedicated to node/viewport performance?

Here is a really large plane; there are supposedly billions of particles generated on it.

But that’s OK, because right after spawning them I mask them with a vertex group, so only these few points are visible in the end. So after we mask the points, the performance of the nodetree should be better, right?

Well, not at all.
The slightest interaction, tweaking the position of these few masked points, or in the worst case even adding a new unconnected node, triggers a recalculation of the whole nodetree.
So masking an area in order to improve nodetree performance won’t actually work. (Viewport performance will be better, of course, but not when working with the tree.)
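Just to illustrate what per-node caching could buy here (a toy sketch, not Blender internals): if each node memoized its output and recomputed only when its inputs actually changed, tweaking something downstream would skip the expensive upstream work entirely.

```python
# Hypothetical dependency-graph sketch: each node caches its result
# keyed on its evaluated inputs, so unchanged upstream nodes run once.
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self.cache_key = None
        self.cache_val = None
        self.evaluations = 0   # counts actual recomputations

    def evaluate(self):
        key = tuple(n.evaluate() for n in self.inputs)
        if key != self.cache_key:        # inputs changed: recompute
            self.cache_key = key
            self.cache_val = self.fn(*key)
            self.evaluations += 1
        return self.cache_val

# An expensive "scatter" feeding a cheap "mask" downstream.
scatter = Node(lambda: sum(range(1000)))   # stand-in for heavy work
mask = Node(lambda pts: pts // 2, scatter)

mask.evaluate()
mask.evaluate()   # nothing changed: the heavy scatter is not re-run
```

With full re-evaluation on every tweak, the heavy node would run every time instead of once.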

I really hope there will be some focus on how Blender does its updates internally one day :sweat_smile:

The fact that the sprint scenes are small scenes doesn’t help, IMO.

Performance for my still life was similar towards the end, and that was just lots of operations, not so many instances. I’m not sure how much of it rests on Geometry Nodes vs. just being a Blender issue, but it definitely needs addressing. For me, navigating the viewport was very choppy, so I wonder if it’s also something to do with how the viewport draws instances.

I am waiting eagerly for some news regarding performance as well. Our team works primarily on larger-scale projects (but nothing extreme - described here: Geometry Nodes) and Blender currently doesn’t offer any smooth workflow to cope with that. So we are forced to use many inconvenient workarounds to finish the projects somehow (weight/vertex painting, scattering using particles, mesh editing … everything gets extremely laggy once models get some polygons, a few modifiers, or particle systems - which means every time your project is not just a simple 20x20 m demo scene of the kind you can see everywhere on YouTube). I am pretty sure today’s HW should be able to handle such tasks easily.

I appreciate all the new cool features, but the poor performance just doesn’t allow us to take advantage of any of them until Blender gets improved a lot in this area (or at least fixed, because sometimes 2.9 has worse performance than 2.79, e.g. vertex painting a larger mesh).

I’m gladly joining the “performance moaners club” with you guys (@BD3D, @Erindale, @jendabek) :star_struck: I suffer from all the above-mentioned issues.

I had the same issue with shader nodes; working with nodes in larger scenes becomes quite a pain. Even just selecting and moving a node becomes frustrating and laggy.

Fingers crossed this issue gets some attention.

My completely uninformed opinion is that there should be a handful of releases centered on just performance. I get that old cruft is preventing mesh editing from being snappy, but the shiny new Geometry Nodes? That’s a bummer. I’ve pretty much only tested GN in… “test” scenarios, apart from one shot in my last short movie, but that one was simple enough: scattering low-res corals on an ocean bed.

I am waiting eagerly for some news regarding the performance as well. Our team works primarily on larger scale projects

Well, TBH, the fact that the viewport gets laggy is to be expected when there are a lot of instances; that’s the downside of working with an OpenGL viewport and has nothing to do with geonodes itself, so I’m not sure why you are expecting the geonode devs to resolve this problem. If there’s optimization planned, it needs to be done at the instance level, so the whole instancing system gets an optimization solution (such as a culling distance that actually works, or switching to bounding boxes/points after a threshold distance…), not only within geonodes.
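The threshold idea could be sketched like this (hypothetical names, nothing to do with Blender’s actual instancing code): past a distance, an instance degrades to a bounding box, then to a single point that is nearly free to draw.

```python
# Toy distance-based LOD selection: pick a cheaper representation
# for instances that are far from the camera.
def choose_lod(instance_pos, camera_pos, full_limit, box_limit):
    d = sum((a - b) ** 2 for a, b in zip(instance_pos, camera_pos)) ** 0.5
    if d < full_limit:
        return "full"    # real mesh, close to camera
    if d < box_limit:
        return "bbox"    # bounding-box stand-in
    return "point"       # single point, nearly free to draw

camera = (0.0, 0.0, 0.0)
lods = [choose_lod(p, camera, 10.0, 50.0)
        for p in [(5, 0, 0), (30, 0, 0), (200, 0, 0)]]
```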

The fact that the nodetree evaluates every node at the slightest update trigger is a separate issue, directly linked to geonodes itself.
I think we need to be quite precise when we talk about performance issues, because we could quickly point to issues not related to geonodes itself, and that is not helping and not related to this thread :thinking:

Yeah, I am running into exactly the same issues. A “bunch of pebbles on a 5x5 meter ground area” is not really representative of what people usually need a scattering solution for. In fact, on such small areas it often makes more sense to use some sort of painting tools to place the instances directly.

Here are some actual, realistic examples of scenes people truly “need” scattering solutions for:
https://forum.corona-renderer.com/index.php?topic=32615
https://forum.corona-renderer.com/index.php?topic=32535
https://forum.corona-renderer.com/index.php?topic=32478
https://forum.corona-renderer.com/index.php?topic=32416
https://forum.corona-renderer.com/index.php?topic=32287
https://forum.corona-renderer.com/index.php?topic=31906
https://forum.corona-renderer.com/index.php?topic=31931
https://forum.corona-renderer.com/index.php?topic=31971
https://forum.corona-renderer.com/index.php?topic=31862
https://forum.corona-renderer.com/index.php?topic=31674

For any proper scattering solution, some sort of proxy system for the rasterized viewport is absolutely crucial. While ray tracers can usually display an almost unlimited amount of geometry at once thanks to memory instancing, GPU rasterization cannot.

The bottom line is that the example scattering use cases for the initial Geometry Nodes release significantly underestimated what scattering means these days. The instancing in those exemplar use cases would not have been impressive even in the ’90s :slight_smile:

I think all the users pointing at performance issues here have been “quite precise” so far.

No one is expecting the geonodes devs to solve these problems, but it’s necessary to point to these issues in the context of the Geometry Nodes project, as Geometry Nodes makes the viewport/weight-painting performance, the unnecessary node tree evaluation, etc. so apparent.

What about the Vulkan API, which they are (supposedly) working on for EEVEE: does it have these same “OpenGL limitations”?

If not, will Vulkan be the API driving the Workbench engine?

No, really, it’s related to rasterization in general; that’s why the viewport of Clarisse is so performant, for example. (Rasterization is an algorithm per triangle, while ray tracing is per ray, so rasterization in general scales badly when there are too many triangles.)
Anyway, all this is becoming slightly off-topic :crazy_face:
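A back-of-envelope cost model of that scaling argument (illustrative only, ignoring all constants, and assuming the ray tracer has a BVH so each ray costs roughly log2 of the triangle count):

```python
import math

# Raster touches every triangle; a BVH-based ray tracer pays about
# log2(T) per ray. So raster cost grows linearly with scene size,
# while ray cost grows mainly with resolution.
def raster_cost(triangles):
    return triangles                          # one pass over all tris

def raytrace_cost(rays, triangles):
    return rays * math.log2(max(triangles, 2))

rays = 1920 * 1080                            # one ray per pixel
small, huge = 10_000, 1_000_000_000
```

For the small scene, raster wins by a landslide; for the billion-triangle scene, the logarithmic ray cost ends up far below the linear raster cost, which matches the Clarisse observation above.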

I agree with @LudvikKoutny

Those examples are a much more realistic production situation than the examples shown and created during development, and that is the kind of scene I talked about a long time ago regarding the complexity needed to evaluate whether the goals are being achieved, both in terms of functionality and performance.

There is one main difference between those scenes and the examples shown here (for example from the magician @Erindale), and it is the view distance, the amplitude of the shots. Usually the examples shown here are short on detail, like the city one (good work :slight_smile: but it lacks a lot of detail to be a believable city) or the forest done by @Erindale, which is a short shot.

So some kind of recreation, similar to that example Corona scene, to benchmark the distribution system would be very welcome. It’s not only a matter of nodes, but a matter of practicality and usability.

Keep in mind that short shots are not what is generally requested in today’s projects, TBH. I’m always dealing with long shots, with the need for rocks, walls, grass, trees, small details precisely located (not just randomly distributed per area), streets, prop buildings, and a long list of fine-detail distribution.

I find the nodes rather complex for the intermediate user: too much math is needed, and there are no high-level nodes built out of nodes internally (to be able to learn from). But that could be a matter of time; a system like this takes time to mature :slight_smile:

Regarding performance: the project currently used to benchmark performance and its limits is Sprite Fright. It was a conscious decision to settle on this. The sample files shared (pebbles, flowers) are a (small) representation of the workflow, but for final performance the film itself is what is taken into consideration.

Also, as many have pointed out, some of the performance limitations come from Blender itself, and as such need to be tackled more systemically.

Additionally, if anyone wants to work on (develop) better performance, that is welcome. I would recommend profiling a real production scene that is being capped by the current performance. Sometimes there are interesting surprises there.

as many have pointed out, some of the performance limitations come from Blender itself.

And what do you think about the limitations coming from Geometry Nodes directly?
Mainly the fact that the whole nodetree is re-evaluated systematically, as the example from @BD3D shows above.

Maybe this is not linked to geonodes? Hmmm

That sounds reassuring. Can you say, at least roughly, what level of complexity is expected in the biggest scenes of the final movie? It’s hard for us to know what to expect from the few bits of concept art that were posted.

I wouldn’t get my hopes too high for Vulkan. Just switching to a new API won’t magically improve performance. Besides, sufficient viewport optimizations can be done right now with OpenGL, but they depend on other systems in Blender, so it will probably take some time.
