2020-09-03 - Particle Workshop

+1 for what astrand said.

This is also the way VEX works. It is just a data processing language, but it is the foundation of all nodes and functions in Houdini.

“Events” as a type isn't needed if you have simple and strong processing logic for common data types.

1 Like

Thanks for the feedback. I am a big supporter of DOD; most of Blender was designed with that concept in mind. What bothers me, though, is that the advanced complexity we want to achieve might not be feasible that way.

Can you point at node systems for effects or physics (or logic) that use a data-centric flow?
Note: the “solver” here is like the big kettle you throw everything into and cook :slight_smile: Dynamics is by nature a complex system, not a linear flow. However, you do want to control what goes into one kettle, and to have many other pots on the fire.

8 Likes

This is not actually true. Many nodes in Houdini are not implemented in VEX, and I’m quite sure most of the dynamics nodes are not.

5 Likes

I think it kind of muddies the waters a bit to hold up VEX as a paragon of DOD that should be emulated for something as complex as a particle system.

VEX is almost exclusively tasked with operating on geometry. It’s pretty trivial to create a DOD operational model for something as non-complex as a vertex. You can easily represent the vertices in geometry as an array. That’s what .obj is. :slight_smile:

When your objects have about three potential properties and no intrinsic functions they need to perform independently of one another, you can safely store them in a series of arrays and iterate through them using generalized functions.
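
To illustrate, a minimal struct-of-arrays sketch of that idea (the `Points` layout and `translate` function are hypothetical, not any existing API):

```cpp
// Hedged sketch (not Blender or Houdini code) of the array layout described
// above: vertex positions stored as parallel arrays, processed by one
// generalized function, with no per-vertex objects involved.
#include <cstddef>
#include <vector>

struct Points {
    std::vector<float> x, y, z;  // one entry per vertex
};

// A "generalized function": the same operation applied to every element.
void translate(Points &pts, float dx, float dy, float dz)
{
    for (std::size_t i = 0; i < pts.x.size(); i++) {
        pts.x[i] += dx;
        pts.y[i] += dy;
        pts.z[i] += dz;
    }
}
```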

You’d have a much harder time rolling a purely DOD approach to heterogeneous objects that not only occupy a position in space, but also have unique properties like mass, momentum, temperature, surface tension, and potentially trigger events from interactions among themselves. The DOD approach to solving this becomes convoluted and messy very quickly.

An OOP approach here gives you the benefit of allowing different types of objects to interact with each other. It also lets you generalize the work the solver does, since it lets you define more of the object properties and behaviors in the object itself, with the solver primarily focusing on shared resource management and traffic control.
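
A rough sketch of that OOP framing, purely for illustration (none of these types exist in Blender, and the "physics" is deliberately trivial):

```cpp
// Each body type carries its own properties and behavior; the solver only
// iterates and delegates, focusing on scheduling rather than per-type logic.
#include <memory>
#include <vector>

struct Body {
    virtual ~Body() = default;
    virtual void step(float dt) = 0;  // behavior lives on the object itself
};

struct RigidBody : Body {
    float mass = 1.0f;
    float velocity[3] = {0.0f, 0.0f, 0.0f};
    void step(float dt) override { velocity[2] -= 9.81f * dt; }  // toy gravity only
};

struct FluidParcel : Body {
    float temperature = 20.0f;
    float surface_tension = 0.07f;
    void step(float dt) override { temperature -= 0.1f * dt; }  // toy cooling only
};

// The solver does "traffic control": it just walks the list and delegates.
void solve(std::vector<std::unique_ptr<Body>> &bodies, float dt)
{
    for (auto &body : bodies)
        body->step(dt);
}
```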

1 Like

Every element fed into a solver can have varying properties such as different positions, orientations, mass, velocity, etc., and a solver can easily iterate over these. The problem is iterating over polymorphic objects where suddenly one doesn't even understand the concept of mass or velocity; then you have to come up with a solution on the fly, which complicates code (in some OOP logic graphs, casting is a common nightmare).

I understand that in Blender, objects are basically containers that store different types of data in a very object-oriented manner. But when a system asks for just a series of transforms, bounding boxes, or control points, or wants to enforce a certain input (poly meshes, points, etc.), this might be better left to a system that can query the object containers for the necessary data and implicitly convert it (it would be nice to notify the user), or even enforce user-specified defaults.

A node might also want to run some logic to determine where it and other nodes are in the node graph tree, or look at values that exist on other objects outside the scene (basically like drivers that search relative paths). In those cases OOP can absolutely be used to solve such problems at a macro level.
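
As a hedged sketch of the "query the container for just the data you need" idea above; `AttributeSet` and `get_float_attribute` are invented names, not an existing Blender API:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

struct AttributeSet {
    std::unordered_map<std::string, std::vector<float>> float_attrs;
    std::unordered_map<std::string, std::vector<int>> int_attrs;
};

// Return the requested attribute as floats: exact match first, then an
// implicit int-to-float conversion (a good place to notify the user),
// and finally a user-specified default value.
std::vector<float> get_float_attribute(const AttributeSet &attrs,
                                       const std::string &name,
                                       std::size_t count, float fallback)
{
    if (auto it = attrs.float_attrs.find(name); it != attrs.float_attrs.end())
        return it->second;
    if (auto it = attrs.int_attrs.find(name); it != attrs.int_attrs.end())
        return std::vector<float>(it->second.begin(), it->second.end());
    return std::vector<float>(count, fallback);
}
```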

When I said branching, I think I may have overgeneralized or misused the term. What I should have clarified is that when you find yourself introducing callback functions into a node, you have a problematic lack of granularity. A callback is just a way to interrupt a process to ask the user “can I get you anything?”. The user should instead have more granular control over the process, so that if they wish to dive deep enough they can modify a solver to their needs. Yes, branching nodes that trigger when certain conditions are met, whether at the scalar-input level or inside a loop, should be provided to the user. This sort of branching will be much easier to debug, because the user has explicitly authored the conditions and the iteration that produce the branch, and isn't tapping into an unknown box of functionality waiting for a turn.

1 Like

Thank you for your time!

For the solver kettle:
The current node group design should work, in my opinion, if access to more granular building blocks is provided: a proper range of solver functionality, from low-level steps such as advection, contact generation, constraint solving, etc. (steps that are currently hard-coded into Blender’s simulations), also put together properly in high-level template nodes. Say the user just wants a stack of cubes and spheres falling onto a plane with an animated gravity property; using nodes, they could (see the sketch after this list):

  • assign the plane, sphere collection, and cube collection their default mass/friction/shape/etc. (helper node)
  • create a rigid body scene solver node group (this is the high level template)
  • plug their collection of spheres + cubes into the dynamic body slot
  • plug the ground plane into the static body slot
  • add either a reference to the scene gravity attribute or a key-framed value node into the gravity slot (maybe there can be a defaults system)
  • take the template output (the input objects with the updated transformation matrices applied after simulating the current frame, or a defined amount of time/steps) and pipe it to the output of the system. (Since we are basically dealing with something that modifies the scene state, and not just a single mesh’s geometry, the scope of such a node system is still something to be addressed in the design.)
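
As a very rough sketch, the high-level template boils down internally to something like the function below; the types and names are assumptions rather than the actual design, and only the advection step is filled in:

```cpp
// Hedged sketch only: a "rigid body scene solver" template reduced to a
// function signature. RigidBody and solve_step are invented names.
#include <vector>

struct RigidBody {
    float position[3];
    float velocity[3];
    float mass;
    float friction;
};

void solve_step(std::vector<RigidBody> &dynamic_bodies,
                const std::vector<RigidBody> &static_bodies,
                const float gravity[3], float dt)
{
    // Only the advection step is sketched here; contact generation and
    // constraint solving against static_bodies would follow as the other
    // internal (currently hard-coded) steps of the template.
    (void)static_bodies;
    for (RigidBody &body : dynamic_bodies) {
        for (int i = 0; i < 3; i++) {
            body.velocity[i] += gravity[i] * dt;
            body.position[i] += body.velocity[i] * dt;
        }
    }
}
```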

But let’s say the user decides that they want sparks and debris to fly whenever an object collides with another object. Currently Blender doesn't support this, and let’s say the node template wasn't designed for it either. Today you would need to hack around it with dynamic paint or something, or modify Blender's code. With this node group, though, you could simply dive in a level and inspect the (well framed/commented/laid out) steps of the rigid body solver. Ah, there it is: the contact points output of the low-level collision detection node. Inspect it in the spreadsheet and you see a list of positions, object pairs, hit normals, etc. You take the wire and send it into an output socket.

Now that you have this list of contact points you can work with it: cull out data points that have too little impact intensity, or that don't have an object with a metal material in the pair. From there you can do all sorts of things with it. You can plug it into the “initial points” socket of that spark particle effect you downloaded from the asset manager, or go wild and put it in a cache node, merge it with the last frame, and use it for instancing ice crystals, rocks, flowers, etc. Maybe even hop on the forums and suggest this part of the simulator be exposed. With no C/C++/Python code, users can expand the possibilities of the solver without having to beg programmers for access to certain callbacks.
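
A minimal sketch of that culling step, with `ContactPoint` and the metal lookup as illustrative assumptions:

```cpp
// Keep only contact points whose impact is strong enough and which involve
// a "metal" object, so the result can feed a spark effect's initial points.
#include <unordered_set>
#include <vector>

struct ContactPoint {
    float position[3];
    float normal[3];
    float impulse;  // impact intensity
    int object_a, object_b;
};

std::vector<ContactPoint> cull_contacts(const std::vector<ContactPoint> &contacts,
                                        float min_impulse,
                                        const std::unordered_set<int> &metal_objects)
{
    std::vector<ContactPoint> kept;
    for (const ContactPoint &c : contacts) {
        const bool metal = metal_objects.count(c.object_a) != 0 ||
                           metal_objects.count(c.object_b) != 0;
        if (c.impulse >= min_impulse && metal)
            kept.push_back(c);
    }
    return kept;
}
```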

Going this route, Everything Nodes should be a LONG-term goal, and it will mean a lot more node functionality (and possibly hundreds more nodes) being introduced into Blender, but I think it will all be worth the growing pains for the flexibility it unlocks.

As for examples: I think the closest thing to what Everything Nodes tries to achieve might be an Entity Component System (ECS). While an ECS is not usually exposed as a node system (although it absolutely can be visualized as one), it certainly achieves the ability to create complex, customizable simulations. Where it might differ is that some ECS implementations go beyond simulations into general-purpose software architecture.

While I cannot point to any closed/commercial software that tries to do what we are attempting, I think looking at an ECS might give good insight. An excellent open source ECS to look at for inspiration is FLECS. It has already solved many of the challenges in creating a general-purpose simulation system. Although many of its solutions might be overkill (such as notifications on component/entity life-cycle changes, prefabs, snapshots, etc.) or not easily applicable to what Blender needs, its core philosophy and functionality should show how an extensible, data-oriented system could tackle everything from particles to other general-purpose applications.
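
For a flavor of that core idea, here is a toy sketch in the ECS spirit; note this is NOT the FLECS API, just the philosophy of components as plain data plus systems that iterate over them:

```cpp
#include <unordered_map>

using Entity = int;

struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

// Components live in plain data containers keyed by entity.
struct World {
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;
};

// System: advect every entity that has both Position and Velocity.
void move_system(World &world, float dt)
{
    for (auto &[entity, vel] : world.velocities) {
        auto it = world.positions.find(entity);
        if (it == world.positions.end())
            continue;
        it->second.x += vel.x * dt;
        it->second.y += vel.y * dt;
        it->second.z += vel.z * dt;
    }
}
```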

Thanks again!

3 Likes

Suggestions regarding this graph:

  • Remove the Events socket, and instead add a Particle Influence node for creating custom influences. This node could have sockets On Birth, On Every Time Step, On Collision and On Kill for event handling. The rationale is that Events + Actions bundled together in a node group are how you make custom influences. So once those are grouped together, you want to be able to integrate them at the same level as builtin influences, and the fact that they do things on certain events becomes an internal detail.

  • Emitters and Influences must be exposable as sockets in group nodes, for the purpose of creating node group assets for custom emitters and influences. If these are special sockets that draw on the bottom/top of nodes, the same must be done for group nodes, again to avoid a distinction between builtin and custom nodes.

  • Remove the name “Dynamic Properties” and always call it “Influences”, since it appears to be just another term for the same thing? At least that is my understanding from the diagram: that emitters can have a set of influences that only apply to particles emitted from that node.

  • Remove the term “Geometry Callback”, “callback” is too much of a programming term and influences are also “geometry callbacks” so the distinction is unclear. Just call it Emitters or Sources.

  • The “Add Mesh” node is unclear. We should support instancing meshes on points, but that mesh should be associated with a point throughout the simulation, so that the solver still strictly outputs points only and so that the mesh has a specified lifetime and can be transformed over time. An “Add Mesh” node group could be created, but internally I think that would consist of an “Emitter” and an “Instance Mesh Influence”. The “Instance Mesh Influence” would set point attributes to specify which mesh to use and its transform, and the Blender dupli system would then be able to use those attributes to create instances.

On a technical level, “Geometry Callbacks” and “Influences” can be functions that take a pointcloud as input and output a modified pointcloud. If we ever want to support making custom node based solvers, this system is entirely compatible with that. You could have an “Execute Influences” node that applies a set of influences to a pointcloud to modify it. But in practical terms, the scope of the project must be kept under control, and custom solvers add too much complexity for the first few iterations of this system.
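
A minimal sketch of that functional view, with `PointCloud` and the signatures as assumptions rather than the actual design:

```cpp
// Influences as pointcloud -> pointcloud functions, plus an
// "Execute Influences" step that applies a list of them in order.
#include <cstddef>
#include <functional>
#include <vector>

struct PointCloud {
    std::vector<float> px, py, pz;  // positions
    std::vector<float> vx, vy, vz;  // velocities
};

using Influence = std::function<void(PointCloud &, float)>;

// "Execute Influences": run every influence over the same pointcloud.
void execute_influences(PointCloud &points,
                        const std::vector<Influence> &influences, float dt)
{
    for (const Influence &influence : influences)
        influence(points, dt);
}

// Example influence: constant gravity along -Z.
Influence make_gravity(float g)
{
    return [g](PointCloud &pc, float dt) {
        for (std::size_t i = 0; i < pc.vz.size(); i++)
            pc.vz[i] -= g * dt;
    };
}
```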

8 Likes

Okay, but why should the Solver be this big thing at the bottom? I mean, the material output and composite nodes are similar concepts. Why can’t it look the same: a node on the far right, with inputs?

Also, about the dynamics effector inputs: why can’t we use a mix node to control how much influence one has over the other, or use a texture to mix them, so you only input one thing?
You could also make the dynamic effector nodes themselves a kind of mix node, into which you plug the previous effector from the back.

2 Likes

The design is still ongoing. As you can see from the current iteration, there is no more monolithic solver node.

6 Likes

This smells so very good

1 Like

That’s really exciting, guys!

Where are the settings of the emitter or the solver, though? Will they be put directly ON the node, or do we have to select them and find the settings in the N-panel? If the latter, maybe it’s time to rethink the place for those settings, because no one will figure out where they are buried (like with the alpha settings of the texture node in the shading editor).

Maybe make a prominent node settings column on the right side of the node editor where the user will see the settings immediately after clicking on a node?

2 Likes

There is one thing I don’t understand: why does there have to be a closed end point?

I mean, the result of a simulation is just a bunch of data. If you don’t use that data, OK, but what if I want to use the motion vectors of a FLIP simulation to advect some granular simulation and some cloth simulation?

It’s the only thing that isn’t clear to me. I always see an “end” point, and I don’t think there should be an “end point”, just a step that could be the end or not, depending on your needs.

7 Likes

I have to admit, my monkey brain likes having an “output” node: I can always tell what the chain of events is actually triggering versus unused nodes just left in the graph. I agree, however, that I like to have attributes accessible at any point, but just because there is an “end” doesn’t necessarily mean you can’t extract what you need, especially since this is WIP.

1 Like

But it exists !!!

Maybe. I just wanted to make clear the importance of being able to access the data of ANY result or part of the tree, not just pre-defined things :slight_smile:

5 Likes

I am still rather concerned about this. The idea that a simulation can trigger many callbacks that immediately execute a chain of logic that can do things like adding new meshes to the scene sounds like it will create a debugging nightmare. The user should be able to defer and batch-process these contact points for a specific iteration, for example trimming them down and feeding them into the initial points of another solver/instancer/frame feedback loop, instead of doing work on them one by one immediately. Also, if we go the route of events without a robust debugging solution, they will be impossible to work with at scale. Even then, existing solutions for debugging this type of logic will not scale: having to breakpoint millions of particles is not a realistic option.
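
A small sketch of that deferred, per-step batching (all names are invented for illustration):

```cpp
// Instead of a callback firing per contact while the solver runs, contacts
// are collected into a per-step buffer and handed downstream once per
// iteration, e.g. as the "initial points" of another solver or instancer.
#include <functional>
#include <vector>

struct Contact { float position[3]; float impulse; };

struct StepEvents {
    std::vector<Contact> contacts;  // filled by the solver during the step
    void clear() { contacts.clear(); }
};

// Called once after the solver step; the whole batch goes downstream at once.
void flush_step_events(StepEvents &events,
                       const std::function<void(const std::vector<Contact> &)> &downstream)
{
    downstream(events.contacts);
    events.clear();
}
```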

Noob question here regarding this same “On Collision”: for a while I was looking for ways to make a “load/play a sound on dynamics collision” type of scenario work.

Say a bunch of cubes colliding with the ground and producing “impact” sounds for each “registered” collision, and the answers were:

  • somebody has to make an add-on for it.
  • “fake” it in post.
  • other obscure shenanigans…

But now I see this “On Collision” -> “Add Mesh” and “Kill”, and my brain is screaming “play sounds”, “do something with the 3D speaker”, “add some noise”. Should I hold my breath about this “FINALLY” becoming a reality, or should I just forget about it?

Also, are there any plans for any kind of crowd simulation built into Blender? (logic, agents, behaviors, etc.)

3 Likes

That’s an interesting, REAL 3D sound creation capability. Maybe even play a random sound from a group of sounds.

1 Like

Another thing related to Voidium’s comment: the user’s idea of “on collision: play sound” seems rather intuitive, right?

Well, this is actually a deceptively complex problem. With this naive approach, you really don’t want every rigid body to produce a sound whenever the physics engine signals that it makes contact with something, not unless you want to blow out someone’s headphones (a couple of spheres = dozens of contacts every frame, at 24 FPS, you get the picture).

In an interactive application that properly pairs physics with interactive audio, there is often a queue system which keeps a running list of contacts, sorted and culled by audible relevance and by their lifetime during an interaction (hit, rolling, sliding, etc.), and which then manages streaming in and mixing the audio.

So in this case, a point cloud of contact points/pairs can still make a much more effective input to, say, a community (add-on?) “3D Physics Audio” node than the event system, as it has more insight into the broader picture of the resulting simulation.
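
A minimal sketch of such a queue's culling step; the struct and the relevance heuristic are illustrative assumptions:

```cpp
// Score contacts for audible relevance, sort, and only let the top few
// drive sounds, so a pile of spheres does not spawn hundreds of
// overlapping impact sounds per frame.
#include <algorithm>
#include <cstddef>
#include <vector>

struct AudioContact {
    float impulse;   // stronger hits are more audible
    float distance;  // to the listener/camera
    int object_id;
};

static float relevance(const AudioContact &c)
{
    return c.impulse / (1.0f + c.distance);  // simple audibility heuristic
}

// Keep at most max_voices contacts per step, most relevant first.
std::vector<AudioContact> select_audible(std::vector<AudioContact> contacts,
                                         std::size_t max_voices)
{
    std::sort(contacts.begin(), contacts.end(),
              [](const AudioContact &a, const AudioContact &b) {
                  return relevance(a) > relevance(b);
              });
    if (contacts.size() > max_voices)
        contacts.resize(max_voices);
    return contacts;
}
```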

3 Likes

Any nodes here will only be able to modify the pointcloud, not directly add anything to the scene.

Putting an Add Mesh into this graph is not the best example, but it is something that I expect will be possible through instancing on points as described in my previous comment.

In theory, if particles can instance objects, they can also instance speaker objects that play a sound. Or they could set some attribute on the points, which is later used by modifier nodes to generate speaker objects. I don’t expect this to work in the first release of particle nodes, but it should be compatible with the design.

Implementing a crowd simulation system is not a priority. Some basic crowd simulation may be possible with the available particle nodes, but I would not expect native Blender support to happen soon; improving and nodifying other types of physics simulations will have higher priority.

5 Likes