2023-07-11 Nodes & Physics Module Meeting

Check the overview thread for more information about the meeting.

Present

  • Iliya Katueshenock
  • Jacques Lucke
  • Hans Goudey
  • Dalai Felinto

Since the Last Meeting

  • Hans continued work on node group operators, with some initial code doing the very basics in main now.
    • The next steps are in progress, with operator-specific input nodes and support for other object types and modes via asset traits on the schedule for this week.
  • Jacques is close to merging the Repeat Zone, which is the name for what we previously called “serial loop”.
  • Lukas continued to work on the internals of panels for nodes and the modifier.
  • Using stable identifiers makes it possible to move simulation zones into and out of node groups without losing the cache or bake.
  • Jacques started work on data-block mapping for the modifier simulation cache, so simulation caching and baking can support materials (and other data-block references like other objects, eventually).
  • As part of exploring the “node tools” project, we investigated the first step of a “context switch” button for editing attributes. The prototype adds a button in the modifier that can switch the active object and mode (a rough Python sketch of such a switch follows this list).
  • The extrude node and a few other nodes are slightly faster with multithreading in part of the process.
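
As a rough illustration of what such a context switch does, here is a minimal Python sketch using Blender's bpy API. The actual prototype lives in Blender's C++ UI code; the function name and the default mode here are only an approximation, not the prototype itself:

    import bpy

    def switch_editing_context(obj, mode='EDIT'):
        """Make `obj` the active object and enter the given mode."""
        # Leave whatever mode the current object is in first.
        if bpy.context.mode != 'OBJECT':
            bpy.ops.object.mode_set(mode='OBJECT')
        bpy.context.view_layer.objects.active = obj
        obj.select_set(True)
        bpy.ops.object.mode_set(mode=mode)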

Meeting Topics

  • Will it be possible to use node group operators on data-blocks that aren’t assets?
    • Not requiring groups to be assets is theoretically possible now that the asset traits are stored on the node tree itself. But it’s not clear where such groups would be added in the menus without asset catalogs.
    • Without that solved, we will require assets for now.
  • #109610: WIP: Geometry Nodes: Points to Curves node
    • This node converts a point cloud to curves, sorting points and grouping them into curves (a short sketch of this behavior follows the meeting topics list).
    • Should this also do point sorting within each curve?
      • Yes, that doesn’t hurt, and it’s intuitive and should be faster to do that at once rather than separately.
    • How does this relate to a more general “Sort Elements” node?
      • Sorting generally makes sense, and there isn’t much of an argument against it.
      • Sorting by Group ID would be good, so different chunks of the geometry can be sorted separately.
      • Hans can make a design task for a general sorting node.
    • Should mesh inputs be considered points too?
      • Theoretically this could work, arguments could be made in either direction.
      • However, since using the “Mesh to Points” node first is “free” performance-wise, we can avoid the code complexity of supporting mesh inputs for now.
  • #109965: Rotation Nodes Design / Splitting Nodes
    • To move forward with the rotation nodes we need to consider how adding a node for each operation affects the design of other nodes.
    • Generally we have focused on adding atomic nodes that only do one thing, since they are more flexible, composable, and generally intuitive when building a node system. Then we rely on node groups for abstraction.
    • Improving the operator that switches between nodes, and building it into Blender, removes some of the downsides of using separate nodes rather than modes.
    • We have been talking about changing math nodes for a while, to make them more atomic. It’s not clear how far we would go, but generally nodes that do one or a few operations on multiple types would be preferred. We won’t have time to do that for 4.0 though, unless a community member has the time to help.
    • The heuristics in the design task for choosing between nodes and modes should go to the wiki. They are purposely a bit vague, since they have to be considered case by case.
  • Consistency for nodes between the different node editors
    • Over time, inconsistencies between node editors have accumulated, partially due to lack of developer time, and partially to avoid breaking changes.
    • Repeat zone in shaders
      • Rendering developers would know better how hard this would be. The high level code for the UI can be shared relatively easily.
    • High level inputs for materials
      • Materials should have a design more similar to geometry nodes, with a list of exposed parameters rather than the current node-editing list in the property editor.
    • Fields in materials
      • The fields concept maps well to materials, separating values that are constant for a shader (like attribute names or image sockets) from varying values.
    • Simulation in the compositor
      • This one is a long shot for now, though some of the recently added UI code would be helpful. Maybe it makes more sense to combine the node tree types instead.
    • Node panels are an example of a new feature that’s added equally to all node tree types.
  • Simulation nodes feedback
    • People want a way to force an input to be a single value rather than an attribute field.
    • The option could be shared with node groups, where this need for “single value only” has come up before.
    • Another option would be an “Allow Attribute” checkbox in the simulation input options, but we agreed a “Single Value” option for node groups and simulation items would be most intuitive.
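
As a rough illustration of the Points to Curves behavior discussed above (grouping points by a “Group ID” and ordering the points of each curve by a weight), here is a minimal NumPy sketch. It is not the node’s actual implementation, and the function and argument names are only illustrative:

    import numpy as np

    def points_to_curve_indices(group_ids, sort_weights):
        """Return one array of point indices per curve, ordered by sort weight."""
        group_ids = np.asarray(group_ids)
        sort_weights = np.asarray(sort_weights)
        curves = []
        for gid in np.unique(group_ids):
            indices = np.flatnonzero(group_ids == gid)
            # Order the points of this curve by their weight in the same pass.
            curves.append(indices[np.argsort(sort_weights[indices])])
        return curves

    # Example: five points split into two curves.
    print(points_to_curve_indices([0, 1, 0, 1, 0], [0.9, 0.1, 0.2, 0.5, 0.4]))
    # -> [array([2, 4, 0]), array([1, 3])]
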
19 Likes

Hi, I’m curious, what happened to this? - GSoC 2022 : Soft body simulations using XPBD (Weekly Reports) - #19 by arcticblazer

Any info on the implementation of new physics solvers for rigid/soft bodies, cloth, etc. in the 4.x timeline?

4 Likes

It just wasn’t finished as far as I can tell. There are no specific plans right now, other than that these solvers should be implemented with nodes, and generalized to lower level nodes (and shipped as node groups) where possible.

3 Likes

Are there any plans for adding matrix math and operations?

3 Likes

I don’t know if it’s implied, but all this for the compositor too, please!

1 Like

Regarding this: if we build a simple rigid body solver for colliding spheres (there are tons of examples, some by @Erindale), as soon as we reach even a small number of particles the solver created with nodes suffers a lot from poor performance, even when it’s super simple and the collisions are simplified, so we cannot comfortably simulate 10,000 balls.

Are there any plans to tackle this point?

I ask this because I’m all in favour of solvers created with nodes, but the current performance makes me doubt whether this would be viable for a true rigid body solver or a true fluid solver in production.

10 Likes

The performance is probably bad because we haven’t really had time to implement the proper nodes or features that would be used to implement a performant simulation. For example, the Index of Nearest node is a start, but we have no way to retrieve the “5 closest points”, so people do things like randomly shuffling the positions to approximate that. We basically need some time in the schedule and some technical artists to work out the missing features in order to do things the “proper” way.
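
For reference, this is roughly what an “N closest points” query looks like with Blender’s Python KD-tree (mathutils.kdtree); Geometry Nodes has no equivalent node yet, which is the gap described above. This is only an illustrative sketch, not a proposed node or API:

    from mathutils import Vector, kdtree

    def n_nearest_indices(positions, n=5):
        """For every point, return the indices of its n nearest neighbors."""
        kd = kdtree.KDTree(len(positions))
        for i, co in enumerate(positions):
            kd.insert(Vector(co), i)
        kd.balance()
        neighbors = []
        for co in positions:
            # find_n() returns (position, index, distance) tuples, nearest
            # first; the first hit is the query point itself, so skip it.
            hits = kd.find_n(Vector(co), n + 1)
            neighbors.append([index for _, index, _ in hits[1:]])
        return neighbors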

11 Likes

That is good to know :slight_smile:

There are probably some things that would have to be hardcoded, but if they are small bits of a solver then the node based solver approach would be the best one.

The thing is that you need more than tech artists, because a proper SPH solver, a FLIP solver, a proper rigid body solver or a cloth solver is not easy for a tech artist to develop. What a tech artist could develop is an approximation, not a real solver; for that you need a specialist. But you probably know that already, I was just thinking out loud, and it’s great to know that the plan is to tackle more complex matters :slight_smile:

Thanks as always @HooglyBoogly !!

First of all, the usage workflow and the design for users are important; compute algorithms and complex optimizations come later.

IMHO not at all.

Without performance the system won’t be used at all, then no critical user base will appear, and then further development won’t be justified.

So first performance, then algorithm + UX (at the same time).

1 Like

Before you optimize, it is better to know that what you are developing actually works and that it covers the use cases you intended to cover.
Personally, I have often been in the situation where I optimized code too early and I made it unnecessarily complicated to experiment with new functionality.

In the end, when it comes to those decisions, the developers working on it know the details of whether optimizations can be done and whether they should be a priority.

5 Likes

This is how geometry nodes were born… I think this is how any development works. The rounds of refactoring made over the years (!) were only made after things were set in place and bottlenecks were identified, etc.

2 Likes

Oh yes, that’s true, but we are not talking about solver-specific optimization, rather about general optimization. In the end many things are common to physics simulations (you could even consider a multiphysics system), and dealing with a massive number of elements, like 10 to 100 million, is rather important; that’s not specific to the solver but to the system itself.

It’s a balance, no doubt about that, but you should have a good base before starting to work on a physics solver in general, no matter how complex or simple it is; if the basis is good, the system can be improved later.

Geo Nodes was born taking two things into account, performance and usability, but one came before the other, and if I recall correctly it was performance: with barely any nodes yet, one of the first measures was to make the tree execute quickly enough to be useful. After that, functionality was added, then performance was revisited, and so on.

What I’m saying is that in its current state (sim nodes) we can already see performance problems even with super simple solvers, so we are not starting from zero, we are continuing. What I mean is that before implementing a node-based solver, performance should be revisited and improved; we cannot have a proper SPH solver if dealing with 10,000 points is slow. If we have the solver but not good performance on a simple simulation, it will be a failure, whereas if we have the performance and then a limited solver, it will already have use cases.

I hope I clarified what I meant; it’s a balance, and I’m just giving a heads-up that in the current state we need more performance for any proper solver :slight_smile:

4 Likes