2020-09-03 - Particle Workshop

Present: Brecht Van Lommel, Dalai Felinto, Jacques Lucke, Ton Roosendaal.
Workshop in Amsterdam from August 31 to September 11.

Dynamics Modifier

A modifier to handle more complex interaction. The logic is built with a node graph, owned by the modifier.


Dynamics Modifier inputs and outputs

The modifier input and output are explicit nodes that represent the flow of geometry in the modifier.


High level abstraction

Users should be able to use the system at a high level. Hand-picked parameters are then exposed in the nodes.


The building blocks of the system are kept inside the main node groups, abstracted away except for advanced usage. Properties connected to the Modifier Input can be exposed in the modifier stack.

Data flow

The high level nodes operate in a clear dataflow of geometry. Events, however, need a different representation for their callbacks. A few examples of design possibilities can be found in the UI Workshop notes.


Often, the simulation effects require different input and output geometry types. For example, a spray gets the nozzle as input and outputs foam particles.

Solver Node

The solver node requires a new kind of input: the influences.

The geometry is passed to the solver as the initial points (for the particle solver). The solver reaches out to its influences, which can:

  • Create or delete geometry
  • Update settings (e.g., color)
  • Execute operations on events such as on collision, on birth, on death.
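The solver/influence relationship above can be sketched in plain Python. Everything here (class and attribute names, the dict-based point representation) is invented for illustration and is not actual Blender API:

```python
# Illustrative sketch only: a solver owns geometry (points) and asks each
# influence to act on it. Event callbacks (on_birth) run alongside the
# regular per-step geometry update (apply). No names here are Blender API.

class Influence:
    """An influence may create/delete points, update attributes,
    or react to events such as birth or collision."""
    def on_birth(self, points):
        pass
    def apply(self, points, dt):
        pass

class Gravity(Influence):
    def __init__(self, g=-9.81):
        self.g = g
    def apply(self, points, dt):
        for p in points:
            p["velocity"][2] += self.g * dt

class ColorOnBirth(Influence):
    def on_birth(self, points):
        for p in points:
            p.setdefault("color", (1.0, 0.5, 0.0))

class ParticleSolver:
    def __init__(self, influences):
        self.influences = influences

    def step(self, points, dt):
        # Event callbacks first (the vertical lines in the diagram)...
        newborn = [p for p in points if p.get("age", 0) == 0]
        for inf in self.influences:
            inf.on_birth(newborn)
        # ...then each influence modifies the geometry dataflow.
        for inf in self.influences:
            inf.apply(points, dt)
        for p in points:
            p["age"] = p.get("age", 0) + 1
        return points

points = [{"velocity": [0.0, 0.0, 0.0]}]
solver = ParticleSolver([Gravity(), ColorOnBirth()])
points = solver.step(points, dt=0.1)
```

The point of the sketch is only the shape of the data flow: geometry moves horizontally through `step`, while event hooks hang off it vertically.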


In this example the callbacks are represented as vertical lines, while the geometry dataflow is horizontal.

Emitter Node

The emitter node generates geometry in the simulation. It can receive its own set of influences that operate locally. For instance, an artist can set up gravity to affect only a subset of particles.




  • Campfire with sparkles and some smoke
  • Walking footsteps in snow
  • Dust puffs on walking steps
  • Shaman cloth
  • Sintel hairsim

Cosmos Laundromat

  • Swirl effect in washing machine
  • Sheep fur sim (also flock)
  • Tornado setups (advanced)


  • Breath vapour
  • Smoke sims for alpha monsters (advanced)

Random ideas

  • Animated creature duplication paths over animated body
  • Melting (animated) objects
  • Moss on trees
  • Bark on trees (or is that shader?)
  • Basic water sims (fountain, drop brick in water)
  • Hair spray


  • Hair and fur systems

Next Steps

Define the subset of nodes required for the basic examples, starting with the effects that can be done with particles, with a particular emphasis on object scattering for set dressing.

Start the UI changes to handle the different types of dataflow.

Implement the nodes required for each supported case, and re-iterate from there.


I think all modifiers in Blender should be “dynamic”, and any modifier should be able to change the input object type. For example, a point cloud goes into a “dynamic” modifier, and the output is another transformed point cloud, or curves (hair), or geometry (particles), or (!) a set of other objects.
That is true “blending”.
A huge number of internal Blender processes should be reorganized into such modifiers for flexibility. For example, text is a group of curves, but the process of transforming it into geometry should be a modifier. Another example: instancing should be a modifier that transforms a set of points (a point cloud, a mesh’s points, particles, etc.) into geometry or any other object type.
Right now, most of these internal Blender processes work only in narrow special cases. If we reorganize them into “dynamic modifiers” with user interaction and allow users to add these modifiers at any place in an object’s generation chain, Blender’s flexibility grows by ~400%.


Will it be possible to use a collection for the geometry input, so multiple objects can be affected easily?

For example, when I build a custom bend deformer and want to deform a complete collection with multiple objects at once. I know that is not particle specific, but it will be handy when trying to apply a particle effect to multiple objects.


Hope there will be some video material after the workshop :)


Is the new Bullet planned to be wrapped as a simulation type as well?

This would pretty much pin down interactive mode, eh?

Also, what about its volumetric soft body sim capability?


To the random ideas I would also add these three:

  • A bunch of marbles colliding and falling into a cloth

  • A wall being broken and destroyed

  • And if it’s possible and under this scope, some sand falling, or being moved by a character


Do we have a way to represent logical loops? (for/while/each/etc.)

Main Issues with the Design (Solver Node & Callbacks):

I am trying to see the purpose of a solver node, and to me it seems like a rather bizarre, closed-ended and difficult-to-grasp concept. A massive uber node that just encapsulates this kind of functionality and multiple steps seems like a limitation as well.

I think that since everything nodes works on such a large amount of data, it should have a heavy focus on data-oriented design (DOD) rather than object-oriented programming (OOP) concepts, not only to help performance but to ease pains and limits in scalability. In my opinion, OOP concepts like callbacks and events should be avoided at all cost. After all, the entire idea behind these callback concepts is to conditionally check when data should be processed; they gain in some intuitive sense (“when this: do that”), but they sacrifice simplicity and linearity and complicate the order of operations to achieve it. In addition, the more you introduce this branching and break the linear flow of logic, the more complicated a node system becomes and the more overhead it can cause to debug why an event isn’t working properly, or where the flow is going (break points?).

While small conditionals are obviously unavoidable, data-oriented design tries to avoid this kind of heavy branching in high-performance functionality and instead focuses on working on data as linearly as possible, with the idea of “Existential Processing”: if data exists, it should be operated on. This might sound like a head-scratcher, but think of it as one process creating a list of jobs for another process to consume. If the list is empty, the cost of the task is basically zero. The task (or another intermediate one) can also sort/prune/duplicate/etc. that data in whatever manner it chooses, to achieve certain effects, be more efficient, or enforce a certain order. These deferred insights would often be required in any complex system and would have to be introduced as a work-around in any immediate, callback-based system. Therefore I strongly encourage the design to focus on Existential Processing.
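A minimal sketch of this producer/consumer idea, with invented names: one stage emits a job list, and the next stage simply consumes whatever exists, so an empty list costs nothing and no per-element callback is needed:

```python
# "Existential Processing" sketch: a detection stage builds a list of
# collision jobs; a resolution stage operates on that list only if it
# exists. All names and the toy ground-plane test are illustrative.

def detect_collisions(positions, ground_z=0.0):
    """Producer stage: emit (index, penetration depth) jobs."""
    return [(i, ground_z - z) for i, (x, y, z) in enumerate(positions)
            if z < ground_z]

def resolve_collisions(positions, jobs):
    """Consumer stage: if no jobs exist, this loop costs nothing."""
    for i, depth in jobs:
        x, y, z = positions[i]
        positions[i] = (x, y, z + depth)  # push back to the surface
    return positions

positions = [(0, 0, 1.0), (1, 0, -0.2), (2, 0, -0.5)]
jobs = detect_collisions(positions)
positions = resolve_collisions(positions, jobs)
```

An intermediate stage could just as well sort, prune, or duplicate `jobs` before resolution, which is the deferred flexibility the paragraph above argues for.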

Creating/transforming a list of data to operate on is actually more intuitive, scales up better with larger amounts of data (e.g. parallel processing of contiguous vertices on a million-poly mesh), and is more debuggable (spreadsheets vs. breakpoints) than demanding some chain of logic be executed immediately, outside the scope of an encapsulated task. This isn’t a matter of mere UX design differences; this is the life and death of the everything nodes initiative. I’m not joking when I say I would rather have 3.0x pushed back than go ahead with creating potentially broken infrastructure (you can’t undo that). If you used any sort of object-oriented game-engine logic system to determine what might be “designer friendly/familiar”, keep in mind that those systems are built on top of very old OOP engine architecture to apply complex (often hacky) single-threaded behavior to a small number of interactable objects (many of those companies are already investigating breaking out into DOD solutions). In CG we are dealing with specialized set pieces that can be broken down into layering many simple, black-boxed operations over massive amounts of data that need to be processed to give the appearance of complexity. With the kind of mass data these VFX systems work on, it usually ends up much closer to manipulating the contents of databases than manipulating a game object. Creating a “silver bullet” that can support both of these scenarios should not be attempted, but it is certainly easier to scale down than to scale up.

So instead of creating a massive node with explicit stages that you have to latch into, we break it into steps (e.g. collide/generate contact constraints, solve, etc.). Events would be unnecessary, as users could query/sort/prune/merge/zip/tag/create data as they need it from a task (e.g. collision tag, position, etc.) and send it off to be processed in the next stage. Users could have this data merged as part of the result of a loop that solves for a particular period of time (multi-step solver nodes seem like a restrictive lack of granularity). Influences seem to be a mixed bag: while things like colliders will obviously have to be plugged in as an input to solve collisions, things like forces and applying scene gravity should be separate nodes. The less black-boxed the solution is, the better. Higher-level node groups can be created for the average user, but the advanced user should be allowed to customize virtually anything.

You can find examples of this kind of data-oriented design here: https://rasmusbarr.github.io/blog/dod-physics.html (physics) and https://www.dataorienteddesign.com/dodbook/node1.html (general).


+1 for what astrand said.

This is also the way VEX works, which is just a data-processing language but the foundation of all nodes and functions in Houdini.

An “Events” type isn’t needed if you have simple and strong processing logic for common data types.


Thanks for the feedback. I am a big supporter of DOD; most of Blender was designed with that concept in mind. What’s bothering me, though, is that the advanced complexity we want to achieve might then not be possible.

Can you point at node systems for effects or physics (or logic) that use a data centric flow?
Note: the “solver” here is like the big kettle you throw everything into and cook it :) Dynamics by nature is a complex system, not a linear flow. However, you do want to control what’s going to be in one kettle, and have many other pots on the fire.


This is not actually true. Many nodes in Houdini are not implemented in VEX, and I’m quite sure most of the dynamics nodes are not.


I think it kind of muddies the waters a bit to hold up VEX as a paragon of DOD that should be emulated for something as complex as a particle system.

VEX is almost exclusively tasked with operating on geometry. It’s pretty trivial to create a DOD operational model for something as non-complex as a vertex. You can easily represent the vertices in geometry as an array; that’s what .obj is. :)

When your objects have about three potential properties and no intrinsic functions they need to perform independently of one another, you can safely store them in a series of arrays and iterate through them using generalized functions.
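That “series of arrays plus generalized functions” point can be shown concretely. A minimal struct-of-arrays sketch, with plain Python lists standing in for the packed arrays a real system would use:

```python
# Struct-of-arrays layout: one array per property, index i is one
# vertex/particle. A single generalized function operates on any pair
# of property arrays. Illustrative only.

pos_x = [0.0, 1.0, 2.0]
pos_y = [0.0, 0.0, 0.0]
vel_x = [1.0, 1.0, 1.0]
vel_y = [0.0, -1.0, 2.0]

def integrate(pos, vel, dt):
    """Generalized function: advance any property array by its rate."""
    for i in range(len(pos)):
        pos[i] += vel[i] * dt

# The same function handles every axis; no per-object dispatch needed.
integrate(pos_x, vel_x, 0.5)
integrate(pos_y, vel_y, 0.5)
```

The limitation the next paragraph raises is visible here too: as soon as elements carry heterogeneous properties and behaviors, a fixed set of parallel arrays and one generalized loop no longer covers the problem cleanly.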

You’d have a much harder time rolling a purely DOD approach for heterogeneous objects that not only occupy a position in space, but also have unique properties like mass, momentum, temperature, and surface tension, and potentially trigger events from interactions among themselves. The DOD approach to solving this becomes convoluted and messy very quickly.

An OOP approach here gives you the benefit of allowing different types of objects to interact with each other. It also lets you generalize the work the solver does, since more of the object properties and behaviors are defined in the object itself, with the solver primarily focusing on shared resource management and traffic control.


Every element fed into a solver can have varying properties such as different positions, orientations, mass, velocity, etc.; this can easily be iterated over by a solver. The problem is when you are iterating over polymorphic objects and suddenly one doesn’t even understand the concept of mass or velocity; then you have to come up with a solution on the fly, which complicates code (in some OOP logic graphs, casting is a common nightmare). I understand that in Blender, objects are basically containers that store different types of data in a very object-oriented manner, but when a system asks for just a series of transforms, bounding boxes, or control points, or wants to enforce a certain input (poly meshes, points, etc.), this might be better left to a system that can query the object containers for the necessary data and implicitly convert (it would be nice to notify the user), or even enforce user-specified defaults. Also, a node might want to run some logic to determine where it and other nodes are in the node-graph tree, or look up values that exist on other objects outside the scene (basically like drivers that search relative paths). In those cases, OOP can absolutely be used to solve such problems at a macro level.

When I said branching, I think I may have overgeneralized or misused the term. I should have clarified my position: when you find yourself introducing callback functions into a node, you have a problematic lack of granularity. The callback is just a way to interrupt a process to ask the user “can I get you anything?”. The user should have more granular control over the process, if they wish to dive deep enough, so they can modify a solver to their needs. Yes, branching nodes should be provided to the user where certain conditions are met, whether at the scalar-input level or inside a loop. This sort of branching will be much easier to debug, as the user has explicitly authored the conditions and the iteration that produce the branch and isn’t tapping into an unknown box of functionality waiting for a turn.


Thank you for your time!

For the solver kettle:
The current node group design should work, in my opinion, if access to more granular building blocks is provided. Say, providing a proper range of solver functionality as low-level steps such as advection, contact generation, constraint solving, etc. (steps that are currently hard-coded into Blender’s simulations), and also providing them put together properly in high-level template nodes. Let’s say the user just wants a stack of cubes and spheres falling on top of a plane with an animated gravity property; they can, using nodes:

  • assign the plane, spheres collection, and cubes collection default mass/friction/shape/etc (helper node)
  • create a rigid body scene solver node group (this is the high level template)
  • plug their collection of spheres + cubes into the dynamic body slot
  • plug the ground plane into the static body slot
  • add either a reference to the scene gravity attribute or a key-framed value node into the gravity slot (maybe there can be a defaults system)
  • take the template output (the input objects with the updated transformation matrices applied after simulating the current frame, or a defined amount of time/steps) and pipe it to the output of the system. (Since we are basically dealing with something that modifies the scene state and not just a single mesh’s geometry, the scope of such a node system is still something to be addressed in the design.)
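The steps above could be sketched as plain function composition. Every name here is hypothetical, invented to mirror the proposed node graph, and the “solver” is a toy one-axis integrator rather than a real rigid body engine:

```python
# Hypothetical sketch of assembling the rigid-body template as function
# composition: a defaults helper, a high-level solver "template", and
# plugged-in inputs. Nothing here corresponds to actual Blender nodes.

def assign_body_defaults(objects, mass=1.0, friction=0.5):
    """Helper node: give every object default physics attributes."""
    return [dict(obj, mass=mass, friction=friction) for obj in objects]

def rigid_body_solver(dynamic, static, gravity, dt):
    """High-level template node: advance each dynamic body one step
    and clamp against a static ground plane at z = 0."""
    for body in dynamic:
        body["vz"] = body.get("vz", 0.0) + gravity * dt
        body["z"] += body["vz"] * dt
        if body["z"] < 0.0:           # toy collision with the plane
            body["z"], body["vz"] = 0.0, 0.0
    return dynamic

# "Plugging sockets": defaults helper -> dynamic slot, plane -> static
# slot, gravity value -> gravity slot.
cubes = assign_body_defaults([{"z": 2.0}, {"z": 0.05}])
result = rigid_body_solver(cubes, static=[{"z": 0.0}],
                           gravity=-9.81, dt=0.1)
```

The output of the template is the same set of input objects with updated transforms, matching the last bullet in the list above.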

But let’s say the user decides they want sparks and debris to fly whenever an object collides with another object… Well, currently Blender doesn’t support this, and let’s say the node template wasn’t designed for it either. Today you would need to hack around this with dynamic paint or something, or modify Blender’s code, but with this node group you could simply dive in a level and inspect the (well framed/commented/laid out) steps of the rigid body solver… Ah, there it is! The contact points output of the low-level collision detection node. Inspect it in the spreadsheet and you see a list of positions, object pairs, hit normals, etc. You take the wire and send it into an output socket. Now that you have this list of points you can work with it: cull out data points that have too little impact intensity, or whose object pair doesn’t include a metal material. With the remaining contact points you can do all sorts of things: plug them into the “initial points” socket of that spark particle effect you downloaded from the asset manager, or go wild and put them in a cache node to merge with last frame and use for instancing ice crystals, rocks, flowers, etc. Maybe even hop on the forums and suggest this part of the simulator be exposed; with no C/C++/Python code, users can expand the possibilities of the solver without having to beg programmers for access to certain callbacks.
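The culling step described here amounts to filtering a list of records. A minimal sketch, with invented field names for the contact data:

```python
# The low-level collision node's output is modeled as a plain list of
# records, so culling is an ordinary filter. All field names
# ("position", "impact", "materials") are illustrative.

contacts = [
    {"position": (0, 0, 0), "impact": 5.0, "materials": ("metal", "rock")},
    {"position": (1, 0, 0), "impact": 0.1, "materials": ("wood", "wood")},
    {"position": (2, 0, 0), "impact": 8.0, "materials": ("wood", "rock")},
]

def cull(contacts, min_impact, material):
    """Keep contacts that hit hard enough and involve `material`."""
    return [c for c in contacts
            if c["impact"] >= min_impact and material in c["materials"]]

# The surviving points could feed e.g. the "initial points" socket of a
# spark effect, or be cached and merged across frames for instancing.
spark_points = cull(contacts, min_impact=1.0, material="metal")
```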

Going this route, everything nodes becomes a LONG-term goal, but it will bring a lot more node functionality (possibly hundreds more nodes) into Blender, and I think it will all be worth the growing pains for the flexibility it unlocks.

For examples: I think the closest thing to what everything nodes tries to achieve might be an Entity Component System (ECS). While not always exposed as a node system (although it absolutely can be, if you visualize it as such), it certainly achieves the ability to create complex, customizable simulations. Where it might differ is that some ECS implementations go beyond simulations into general-purpose software architecture.

While I cannot point to any closed/commercial software that tries to do what we are attempting, I think looking at an ECS might give good insight. An excellent open-source ECS to look at for inspiration is FLECS. It has already solved many of the challenges in creating a general-purpose simulation system. Although many of its solutions might be overkill (such as notifications on component/entity life-cycle changes, prefabs, snapshots, etc.) or not easily applicable to what Blender needs, its core philosophy and functionality should show how an extendable data-oriented system could tackle things from particles to other general-purpose applications.

Thanks again!


Suggestions regarding this graph:

  • Remove the Events socket, and instead add a Particle Influence node for creating custom influences. This node could have sockets On Birth, On Every Time Step, On Collision and On Kill for event handling. The rationale is that Events + Actions bundled together in a node group are how you make custom influences. Once those are grouped together, you want to be able to integrate them at the same level as builtin influences, and the fact that they do things on certain events becomes an internal detail.

  • Emitters and Influences must be exposable as sockets in group nodes, for the purpose of creating node group assets for custom emitters and influences. If these are special sockets that draw on the bottom/top of nodes, the same must be done for groups, again to avoid a distinction between builtin and custom nodes.

  • Remove the name “Dynamic Properties” and always call it “Influences”, since it appears to be just another term for the same thing. At least that is my understanding from the diagram: that emitters can have a set of influences that only apply to particles emitted from that node.

  • Remove the term “Geometry Callback”; “callback” is too much of a programming term, and influences are also “geometry callbacks”, so the distinction is unclear. Just call them Emitters or Sources.

  • The “Add Mesh” node is unclear. We should support instancing meshes on points, but that mesh should be associated with a point throughout the simulation, so that the solver still strictly outputs points only and so that the mesh has a specified lifetime and can be transformed over time. An “Add Mesh” node group could be created, but internally I think that would consist of an “Emitter” and an “Instance Mesh Influence”. The “Instance Mesh Influence” would set point attributes to specify which mesh to use and its transform, and the Blender dupli system would then be able to use those attributes to create instances.

On a technical level, “Geometry Callbacks” and “Influences” can be functions that take a pointcloud as input and output a modified pointcloud. If we ever want to support making custom node-based solvers, this system is entirely compatible with that. You could have an “Execute Influences” node that applies a set of influences to a pointcloud to modify it. But in practical terms, the scope of the project must be kept under control, and custom solvers add too much complexity for the first few iterations of this system.
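A minimal sketch of this idea, with invented influence names: each influence is a pointcloud-to-pointcloud function, and an “Execute Influences” node is just a fold over the list:

```python
# Influences as pointcloud -> pointcloud functions. They compose by
# applying each in turn. All names here are illustrative, not a
# proposed API.

def gravity_influence(points):
    """Update an attribute on every point (one toy time step)."""
    return [dict(p, vz=p["vz"] - 0.981) for p in points]

def kill_below_ground(points):
    """Delete geometry: remove points that fell below z = 0."""
    return [p for p in points if p["z"] >= 0.0]

def execute_influences(points, influences):
    """The hypothetical "Execute Influences" node: a simple fold."""
    for influence in influences:
        points = influence(points)
    return points

cloud = [{"z": 1.0, "vz": 0.0}, {"z": -0.5, "vz": 0.0}]
cloud = execute_influences(cloud, [gravity_influence, kill_below_ground])
```

Because each influence has the same signature, builtin and user-made influences are interchangeable, which is what makes the node-group asset idea above work.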


Okay, but why should the Solver be this big thing at the bottom? I mean, the Material Output and Composite nodes are similar concepts. Why can’t it look the same? A node on the far right, with inputs.

Also, about the dynamics effector inputs: why can’t we use a mix node to control how much influence one has over the other, or use a texture to mix them? So you input only one thing.
Also, you could make the dynamic effector nodes themselves some kind of mix node, to which you plug the previous effector from the back.


The design is still ongoing. As you can see from the current iteration, there is no more monolithic solver node.


This smells so very good


That’s really exciting, guys!

Where are the settings of the emitter or the solver, though? Will they be put directly ON the node, or do we have to select it and find the settings in the N-panel? If the latter, maybe it’s time to rethink the place for those settings, because no one will figure out where they are buried (like the alpha settings of the texture node in the shading editor).

Maybe make a prominent node settings column on the right side of the node editor, where the user will see the settings immediately after clicking on a node?


There is one thing I don’t understand: why does there have to be a closed end point?

I mean, the result of a simulation is just a bunch of data. If you don’t use that data, okay, but what if I want to use the motion vectors of a FLIP simulation to advect some granular simulation and some cloth simulation?

It’s the only thing I don’t see clearly. I always see an “end” point, and I don’t think there should be an “end point”, just a step that could be the end or not, depending on your needs.


I have to admit, my monkey brain likes to have an “output” node; I can always tell what the chain of events is actually triggering vs. unused nodes just left in the graph. I agree, however, that I’d like to have attributes accessible at any point in time, but just because there is an “end” doesn’t necessarily mean you can’t extract what you need, especially since this is WIP.
