Geometry Nodes

Is the performance concern about the amount of point calculation, or about the geometry it creates? I was just thinking it would be nice to have a node that can decimate geometry to a bounding box in the viewport and use the full geometry for rendering, like the lodify.py script.
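Something along the lines of what lodify.py does today from Python, e.g. this minimal sketch (the vertex threshold and scene handling are just assumptions for illustration):

```python
import bpy

# Hypothetical cutoff: meshes heavier than this only draw their bounding
# box in the viewport; renders are unaffected by display_type.
VERT_THRESHOLD = 100_000

for obj in bpy.context.scene.objects:
    if obj.type == 'MESH' and len(obj.data.vertices) > VERT_THRESHOLD:
        obj.display_type = 'BOUNDS'  # viewport-only simplification
```

A node version of this would presumably just branch on whether the evaluation is for the viewport or for the final render.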

1 Like

Whether Blender uses Vulkan for rendering doesn’t really have anything to do with the geometry nodes performance. At best it’s a tangential relationship.

The performance being discussed is that of the “Point Distribute” node. Algorithms that distribute points in nice patterns, like the “Poisson Disk” scattering being pursued here, just tend to be performance-heavy.

1 Like

The same is true for the software with the big H that mustn’t be mentioned in this forum.
If you scatter 1,000,000 points on a grid in a “dumb” way it’s rather fast, but it results in ugly patterns with a lot of overlapping point positions: in some places there are clumps of points, in others there are holes without any points at all.
If you then activate the “relax” option, the whole process gets slower… a lot slower. This has nothing to do with the graphics card or viewport display but with the underlying algorithms or methods used to push points apart. That’s also the reason why particle-based fluid simulations become slow so quickly: not because the GPU can’t display them any faster, but because there’s a lot going on under the hood.
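To illustrate why the relax step hurts, here is a deliberately naive push-apart relaxation in Python/NumPy (not Blender’s actual implementation, just the general idea): every point gets compared against every other point, so the cost grows quadratically with the point count.

```python
import numpy as np

def relax(points, radius, iterations=10):
    """Naive relaxation: points closer than `radius` push each other apart.
    This does O(n^2) work (and memory) per iteration, which is why a million
    points get slow fast. Real implementations use spatial acceleration
    structures (grids, KD-trees), and Blender's algorithm may differ."""
    for _ in range(iterations):
        diff = points[:, None, :] - points[None, :, :]   # (n, n, 3) pair vectors
        dist = np.linalg.norm(diff, axis=-1)             # (n, n) pair distances
        np.fill_diagonal(dist, np.inf)                   # ignore self-distance
        too_close = dist < radius
        push = np.where(too_close[..., None], diff / dist[..., None], 0.0)
        points = points + 0.5 * radius * push.sum(axis=1)
    return points
```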

EDIT: OK, sorry. I just gave the blend file from JuanGea a try and now I know what you mean. It’s the pure viewport performance you’re talking about. And it’s, well, quite sluggish.

3 Likes

I don’t see how the distribution can slow down the rendering of the viewport. The points only have to be calculated once, not every frame.

Just for the record:
If I take the “1million_test.blend” file and rebuild the same setup with the former “Dupliverts” workflow, by parenting the icosphere to the grid containing those 1 million+ verts and setting “Instancing” to “Vertices”, I get exactly the same viewport performance (in my case 2 fps with a 1080 Ti) as with the Geometry Nodes branch and the original blend file.
So the culprit is not the new Geometry Nodes but the viewport itself.
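For anyone who wants to reproduce that setup from a script, it’s only a couple of lines of bpy (the object names are assumptions; rename them to match the file):

```python
import bpy

grid = bpy.data.objects["Grid"]         # the 1 million+ vertex emitter mesh
sphere = bpy.data.objects["Icosphere"]  # the object to instance

sphere.parent = grid
grid.instance_type = 'VERTS'            # the old "Instancing > Vertices" setup
```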

2 Likes

Hey, I did some mockups and have a few questions

about the workflow design:
when I look at @dfelinto’s designs, the workflow mainly works with geometry data that is passed around, and attribute data is only available as inputs.

If we want a flexible workflow, shouldn’t we be able to separate attributes from geometry and do attribute manipulation directly on the attribute types?

Aren’t attributes mostly just floats or vectors? (Even point positions and normals?)

@HooglyBoogly what if the math nodes were inside a special for_each attribute values box?
From the user’s perspective it would be visually the same experience, even if the code of the nodes needs to change once they are inside the box.
I believe it’s quite important/basic to introduce loops to users if we are working with arrays of values.

*assuming the workflow allowed the possibility of separating attributes from geometry, of course

3 Likes

@SteffenD well, yes, you get the same performance problem, but the thing is that as a user I don’t know if the culprit is the viewport or the new geometry nodes. There is a big performance problem here either way, and keep in mind that this is a new system that has been developed from the ground up for more than a year.

The same happens if you enter edit mode on a mesh with 1 million vertices (and we can’t even talk about a 10 million vertex mesh): you will suffer A LOT, and that must have some relation with Geometry Nodes, so I doubt it’s a viewport-related problem.

TBH I think the pebbles example is a bit of a cheat: it’s a super small plane. Who would want to scatter pebbles in such a small area?

The pebbles example should have been a big area, like 100m x 100m; there you could see whether the scatter solution works or not. Without that kind of test it’s a proof-of-concept, but not a properly designed, production-ready tool IMHO.

IMHO the sprint scenes should aim at a higher standard, not at a comfortable one. I mean, the target may not change, but you won’t see the same things scattering on a 2x2 plane as on a 100x100 plane. The sprint target is the same and the scene is “the same”, but you will be much closer to a production situation than to a paper/developer-art situation.

The funny thing is that I used the master branch (without any Geometry Nodes at all) for my instancing test. So they can’t be the source of the evil :wink:

I’m not saying that the nodes are solely responsible, but the resulting performance is bad, and as a user you don’t care whether the performance problem comes from the nodes themselves or from the underlying operations.

In your test you just did an action that gives awful performance too, but that does not necessarily mean that the same underlying operation is happening behind the scenes. Or maybe it is.

Take the Subdivision Surface node as an example: it’s “the same” as the modifier, but it’s not actually the same; a new node had to be implemented. I don’t know what’s happening behind the scenes of the instancer node, but what I do know is that the resulting performance is bad. If the Geometry Nodes system and effort is to show its power, it needs to have performance; maybe some performance problems have to be fixed before the nodes project can go ahead, or something like that.

The only thing I can see right now is that in a production situation of instancing a million pebbles (as I said, nothing too fancy) performance is awful. I’m not sure where the bottleneck is, but it’s a bottleneck that should be fixed before this can be considered production-ready, especially when the first/main target of the sprint was scattering; right now it’s not a good solution for scattering, I’m afraid. Why that is happening is something the developers can discover with a proper performance investigation, but just adding nodes that do things is not going to be useful: we need to have those nodes AND performance.

I have insisted on the performance side since the beginning of the particle nodes / everything nodes project. It’s key; without performance this will be useless in production.

2 Likes

The initial goal of the geometry nodes project is to be used in production (Sprite Fright). If performance becomes a showstopper we will find a way of prioritizing it.

9 Likes

I’m glad to hear that. I hope the sprints can also use bigger scenes for their development and testing; that will help to ensure at least a part of the performance :slight_smile:

2 Likes

done

I’m not sure I can give the best answer to this question-- there was a fair amount of design discussion before I was involved in the project. But it’s a good question so I’ll try.

Honestly, I think splitting attributes from their geometries will cause more problems than it fixes: you then also have to keep a mental model of which geometry the attribute came from, so you’re keeping track of two things when you could just keep track of one. For example, in the screenshot you’ve provided, where there was once a fairly simple progression of nodes, there is now an extra node and two paths you have to follow to understand the node tree.

This doesn’t mean there can’t be powerful ways to transfer attributes from one geometry to another. Using nodes and a more generic attribute system will give us an opportunity to rethink the functionality of the data transfer modifier, which will hopefully be a nice improvement. It might be a bit before the team gets to work on that sort of thing though, we’ll see.

I think eventually we might want such a system, especially for more complex deformation in nodes. But we’ve decided that working on the attribute level where every node modifies the geometry in some complete way makes more sense for the basic design of the system. One reason is that Blender already works this way-- a node is sort of analogous to a change you make to an object in edit mode or with an operator. My hunch is that will be useful in the future. Another reason is that in a data-flow system like we have, having such a “box” would introduce a separate paradigm, making the system harder to learn.

I’m sure there are other reasons too.

2 Likes

I’m all for that direction: users only have one connection to make/unmake between nodes, so it’s simpler.

I don’t understand the need for a separate attribute editor. An attribute will normally just be extra info for a point/vertex/face/object, for example:

randomValue (point)
vertexColor (vertex)
faceColor (face)
customData (object)

You can edit these just like position, normal, group, … and they don’t need to be separate from the geometry. Maybe the problem is thinking in terms of “geometry” data instead of a simple data flow that contains different types of “primitives” like point/vertex/face, each one with some kind of integer/float/vector data attached to each value.
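In bpy terms (recent versions with the generic attribute API) that is more or less how it already looks: a named attribute is just a typed array stored on one domain of the mesh, not something separate from it. A quick sketch, where the attribute name and values are only examples:

```python
import bpy

mesh = bpy.context.object.data

# A float stored per point (vertex); other domains and types work the same way.
attr = mesh.attributes.new(name="randomValue", type='FLOAT', domain='POINT')
for i, item in enumerate(attr.data):
    item.value = i / max(len(attr.data) - 1, 1)  # e.g. a simple 0..1 ramp
```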

Thanks for taking the time to talk about this design decision.

I completely understand this decision, but then why are there attribute inputs on the nodes?
Not sure, but it seems like a workflow incoherence then? Maybe there’s a plan to support both data flows later on?

Seeing attribute-type inputs directly inside the node editor implies to the user that it’s possible to manipulate attributes directly, with operations such as adding attributes or extracting them from their geometry as I illustrated above. Personally I instinctively wanted to do that, just because I saw these inputs.

Maybe it’s best to hide attributes from the user (just like in the image above) until a functional attribute data flow is implemented?
I strongly believe that showing the attribute inputs may mislead users into wanting to separate attributes from their geometries.

2 Likes

Maybe to be able to expose the attribute name to the modifier ui? Just guessing though.

> Maybe to be able to expose the attribute name to the modifier ui? Just guessing though.

Indeed, but they could hide the inputs; that’s not a problem.
I wonder what the intended usage of these inputs would be if the data flow wasn’t meant to use them.

There is no reason the attribute fields can’t offer IntelliSense-style auto-complete suggestions, so I’m inclined to agree.

Does Geometry Nodes plan to add a Script node? One that runs a script directly in the node, with properties that can be exposed as sliders on the node. You could use scripts to create points, edges, and faces, and control the position, size, rotation, and scale of those points, edges, and faces. Nodes could then pass and compute data from one node to another, which is what I want most.
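Something like what you can already do with plain bpy scripting, but driven from inside the node tree. The snippet below is only an illustration of the kind of data such a node would produce; no script node like this exists yet:

```python
import bpy

# Four points on a line, connected by edges; no faces in this example.
verts = [(float(x), 0.0, 0.0) for x in range(4)]
edges = [(i, i + 1) for i in range(3)]
faces = []

mesh = bpy.data.meshes.new("ScriptedMesh")
mesh.from_pydata(verts, edges, faces)
mesh.update()

obj = bpy.data.objects.new("ScriptedMesh", mesh)
bpy.context.collection.objects.link(obj)
```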