Particle Nodes UI

This thread can be used to discuss the high level particle nodes user interface. If possible, discussions about individual features should be kept separate.

The main goal is to figure out what particle node trees should look like and how they control the behavior of particles. I welcome all kinds of mockups of possible node trees that achieve certain particle effects. If the same structure could be used for other kinds of simulations, even better! Most importantly, the node tree should be easy to read and understand, but also provide a lot of flexibility.

Please don’t just provide screenshots or video links, but also explain how it works in a concise way.

I wrote two documents on the topic already:

Furthermore, there are two videos showing the original proposal in action:

More detailed proposals for entire workflows should be made in separate documents and linked here.

[EDIT] Latest proposal:


If I may offer some advice, it is to create a set of small custom preset node groups, by way of example, that can also be linked together to obtain different effects.

A series of examples, in other words, that at the same time serve as quick compositions ready for use…


We will have that in any case at some point, I have no doubt about that.


The new proposal doesn’t strike me as quite the right solution. The key benefit of a node system is that you get a very clear representation of the flow of information, simply having several nodes refer to a single name, even if assisted with colour, removes a lot of that benefit.

The key problem you identify is the Particle Type node, because it has many potential inputs on the left, and many potential outputs on the right. One solution would be to split this node in two, so the inputs are in one, and the outputs are in another.

You’re still splitting up the tree flow, but you’re doing it in one place.

It may still be preferable to make the links explicit:

Or maybe the problem would be best solved with a new node layout option which functions kind of like this (which would benefit complex shader node setups as well):


Node instancing could help. One node could be instanced in several places in the node tree and connect to different inputs, eliminating the long reroute lines. Changing options on one instance changes all of them.

An instanced reroute node connected to distant parts of the node tree could be a neat trick.
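As a rough illustration of what instancing implies under the hood (hypothetical names, not actual Blender code): instances are lightweight placements that all share one underlying settings object, so editing any one of them edits all.

```python
# Hypothetical sketch of node instancing. NodeData and NodeInstance are
# illustrative names, not part of any real Blender API.

class NodeData:
    """The single source of truth for a node's options."""
    def __init__(self, **options):
        self.options = options

class NodeInstance:
    """A lightweight placement of a node somewhere in the tree."""
    def __init__(self, data, location):
        self.data = data          # shared, not copied
        self.location = location

    def set_option(self, key, value):
        self.data.options[key] = value   # visible to every instance

data = NodeData(strength=1.0)
a = NodeInstance(data, location=(0, 0))
b = NodeInstance(data, location=(800, -200))  # far away in the tree

a.set_option("strength", 2.5)
print(b.data.options["strength"])  # the distant instance sees the change
```

This is exactly why long reroute lines could be avoided: the visual placements differ, but the data is one object.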


Thanks for the suggestion. Some notes:

  • It partially solves the tree growth problem. Not entirely though, because many of the “subtrees” grow to the left and to the right.
  • The problem with node groups that contain e.g. an emitter and an event is not solved by this at all.
  • When nodes influence multiple Particle Types, there would be potentially many long links.

I think my solution is still better, but we need to find better ways to visualize the relations between Particle Type nodes and nodes that influence the behavior.

Here is an example made by @ixd that provides good clarity but also keeps the benefits of separating the nodes:

Not saying that this is the best solution, but it could work quite well and might be worth trying.

From what I’ve seen in screenshots, Unity’s particle system solves a similar problem by having a “frame” around the nodes that influence the behavior of a specific particle.
That could also work in Blender, but it does not work well when we want e.g. force nodes with an option to influence all particle types.


Implementing this should not be too hard. However, I think this should be the last option, if nothing else works. After all, these links do have a meaning, and just hiding them with such a feature gives a false impression of what a node tree is doing.


I’m not bothered at all about node trees growing. The visual clarity of node connections is still superior to manually checking and comparing names from dropdown boxes. When node trees become long, a reroute node can be used to fix the excessive length of the tree. When a node tree with these dropdowns becomes very long, you’ll have no idea where to look for that Type 2 Particle type.

Another thing that really bothers me about the current design is that it violates a core principle of all other node trees in Blender: the fact that an input socket can strictly have only a single connection:

I’d rather suggest using the same design as in Shader and Compositor node trees - a mix_forces node:

Besides the visual clarity it offers, it also allows for a controller (Fac) socket to be implemented. The Voronoi input I put there as a teaser would give a hint on how flexible this could become. If that’s not the design that force fields can follow, the other option is daisy-chaining the nodes, like we have to with bump maps:
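To make the mix_forces idea concrete, here is a minimal sketch of what such a node could compute (assumed semantics; mix_forces is the proposed node, not an existing one): a per-component linear blend where Fac = 0 yields the first force and Fac = 1 the second. The Fac input could then be driven per particle, e.g. by a texture.

```python
# Sketch of a hypothetical mix_forces node: two force vectors blended
# by a factor, the same way a MixRGB node blends colours.

def mix_forces(force_a, force_b, fac):
    """Linear blend: fac=0 gives force_a, fac=1 gives force_b."""
    return tuple((1.0 - fac) * a + fac * b
                 for a, b in zip(force_a, force_b))

gravity = (0.0, 0.0, -9.81)
wind = (3.0, 0.0, 0.0)

# 75% gravity, 25% wind, per component
print(mix_forces(gravity, wind, 0.25))
```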


Besides, bigger node trees can also be avoided by adding more than one BParticle modifier to the object I suppose? Put the main effects in the first one, and the stuff that is supposed to act on top of the resulting points in the node tree of a subsequent modifier.


Having only one connection per input socket is not a core principle of Blender’s node trees in general, but a core principle of data flow node trees (which all of Blender’s node trees currently are). However, not everything is or should be data flow.

A mix forces node could work in the current design. However, I’d prefer to make this mixing at a different level: I want users to be able to mix Falloffs that plug into forces. I tested this approach in Animation Nodes and I think it works really well. It allows for the same things but has much more general applicability.
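As a rough illustration of that falloff-level mixing (assumed semantics, based on the Animation Nodes experiment mentioned; sphere_falloff and mix_falloffs are hypothetical names): falloffs map a position to a weight in [0, 1], a mix falloff blends two of them, and the result scales a single force.

```python
# Hypothetical sketch: mixing at the falloff level rather than the
# force level. None of these names are real Blender or AN API.

def sphere_falloff(center, radius):
    """Weight 1 at the center, fading linearly to 0 at the radius."""
    def falloff(pos):
        d = sum((p - c) ** 2 for p, c in zip(pos, center)) ** 0.5
        return max(0.0, 1.0 - d / radius)
    return falloff

def mix_falloffs(falloff_a, falloff_b, fac):
    """Blend two falloffs into one; the result is itself a falloff."""
    def falloff(pos):
        return (1.0 - fac) * falloff_a(pos) + fac * falloff_b(pos)
    return falloff

def apply_force(base_force, falloff, pos):
    """A single force node scaled by whatever falloff plugs into it."""
    w = falloff(pos)
    return tuple(w * f for f in base_force)

mixed = mix_falloffs(sphere_falloff((0, 0, 0), 2.0),
                     sphere_falloff((4, 0, 0), 2.0), 0.5)
print(apply_force((0.0, 0.0, -9.81), mixed, (1.0, 0.0, 0.0)))
```

The general applicability comes from the fact that the mixed result is again a falloff, so it can plug into any node that accepts one, not just forces.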

I’m not a big fan of chaining forces, because forces don’t really have an order.

I hope that we get the simulation types, including particles, out of the modifier stack. It is not clear yet how that will work. However, I’m 100% sure it will be possible to have multiple such node trees for different particle types.


As @jacqueslucke is pointing out, I think it’s important to mention that a node-system for particles must be inherently different from shader nodes in a few ways.

Shader nodes & comp nodes are data-flow node-trees, with one node passing the data to the next.

Particles are more like events and logic; in many ways this is more like the kind of node system you’d need for game logic than for materials. When you simulate the particles, you ‘run’ the particle nodes logic.
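The distinction can be illustrated with a toy sketch (purely illustrative Python, no real Blender API): a data-flow tree is evaluated by pulling values through connections, while particle logic is run as handlers that fire on simulation events.

```python
# Toy contrast between the two paradigms (illustrative only).

# Data flow: evaluating the output pulls values through the tree.
def data_flow_eval(node):
    inputs = [data_flow_eval(i) for i in node["inputs"]]
    return node["op"](*inputs)

add = {"op": lambda a, b: a + b,
       "inputs": [{"op": lambda: 2, "inputs": []},
                  {"op": lambda: 3, "inputs": []}]}
print(data_flow_eval(add))  # 5

# Event/logic: handlers run when the simulation raises events.
handlers = {"birth": [], "step": []}

def on(event):
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

@on("birth")
def set_initial_velocity(particle):
    particle["velocity"] = (0.0, 0.0, 1.0)

@on("step")
def apply_gravity(particle):
    vx, vy, vz = particle["velocity"]
    particle["velocity"] = (vx, vy, vz - 9.81 * 0.04)  # one step, dt = 1/25 s

particle = {}
for fn in handlers["birth"]:
    fn(particle)
for fn in handlers["step"]:
    fn(particle)
print(particle["velocity"])
```

In the first half, nothing happens until something asks for the output; in the second, the simulation drives execution and the nodes merely react.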

I expect that modifier nodes will more closely resemble shader nodes, because they would probably also be data-oriented.

Just important to mention this.


Does this mean that, in combination, some kind of bridge would be useful: particle nodes managing particle dynamics while shader nodes handle materials and textures?

Procedural textures come to mind, behaving like data emitted by the particle nodes.

After seeing the videos and some info (not the actual docs yet, sorry) I see what I think is a big problem.

There is no execution order or flow order; everything happens all at once, without any control (we know it is not really at once, but the user has no way to define the execution order).

I think a flow control node would help, something like a “flow set” where you work with a bunch of nodes that you know will be executed first and linearly. Of course you could have “exterior” groups that execute in parallel and such things, but at least we would have a clear idea of what is happening and in what order.
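For what it’s worth, one common way such a “flow set” could be resolved internally (a sketch only, not the actual design) is to treat ordering links as edges in a dependency graph and derive a deterministic execution order with a topological sort:

```python
# Sketch: deterministic execution order from explicit dependencies
# (Kahn's algorithm with alphabetical tie-breaking). Node names are
# made up for illustration.
from collections import deque

def execution_order(nodes, depends_on):
    """depends_on[n] = set of nodes that must run before n."""
    indegree = {n: len(depends_on.get(n, ())) for n in nodes}
    dependents = {n: [] for n in nodes}
    for n, deps in depends_on.items():
        for d in deps:
            dependents[d].append(n)
    ready = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in sorted(dependents[n]):
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return order

nodes = ["emitter", "force", "event", "render"]
deps = {"force": {"emitter"}, "event": {"emitter"},
        "render": {"force", "event"}}
print(execution_order(nodes, deps))
# ['emitter', 'event', 'force', 'render']
```

The tie-breaking rule is the important part: independent nodes may run in any order, so a fixed rule (here alphabetical) is what makes results reproducible.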

Another thing I would totally avoid is using names as references. I think a node that can reference another node group (or particle system) directly would be better; “strings” are not a good idea in general, and having to check or remember names is not good.

Finally, I have not tried the build yet (I plan to in the coming days), but I want to stress the importance of being able to handle several million particles. Right now I’m working on a sand scene where we have 20 million particles and it’s not enough; we would need 100 million. Even simulating 20 million inside Blender right now is totally impossible, but it’s totally required.

I’ll do more comments as soon as I try the branch and carefully read the docs :slight_smile:

In case particle type nodes are just a summary without input or output sockets: we could present those summaries in another editor view made of as many columns as there are particle types.
The top of this summary view (containing the particle type names) could continue to be displayed in the nodes view and be used as horizontal tabs to highlight and select the corresponding nodes.

In case the particle type node has sockets:

  • If the one-connection-per-socket paradigm is used, we could imagine that kind of node (like Group Input, Group Output or Lists) being able to have an unlimited number of sockets.
  • If it has a limited number of sockets, they have to support multiple connections. That also means reroute nodes have to do the same and support multiple connections; without this ability you cannot make a clear tree, because reroutes become useless.
  • If it has a limited number of sockets and reroute nodes are still limited to one connection, the particle type node could have a completely different look made of concentric circular areas corresponding to its 3 sockets. That would allow connections from anywhere.

What if the connection between the two particle nodes (in AndrewC’s images) were only visible when highlighting and/or interacting with one of them (i.e. moving one around), and disappeared once you interact with some other node? (The inputs and outputs would need clear, distinct colours to show whether they are connected or disconnected.) That way you can still have a clean workspace with no lines going all over the place, and you can check which node connects to which just by interacting with the nodes.


Does this not work the same as Animation Nodes? I ask because you achieve the “flow” through the connections. Here in this example, I don’t know the execution order of the two expression nodes:

Whereas in this one I do; left to right in this case, as they are connected:

So for the first I can do this, for example, to force an order:

Just asking if this is the case…

Cheers, Clock.

Yes, but if you have two different systems (for example, I have some AN tools for distribution) you cannot know which tools are going to be executed in which order.

In this case if you have two separate particle systems you don’t know which one is going to be first and which is going to be second.

I find it important to have some kind of flow control. I cannot remember the specific case now, but at some point in the past I stumbled upon a situation where the lack of an execution order made some generated vertices lag one frame behind other generated geometry I had animated with AN (I cannot explain it exactly, but it was a problem).

Since this design is being built from the ground up, I find it important to keep this in mind. Having control over everything is pretty important in particles, and the first thing we need control over is WHEN and WHAT is executed :slight_smile:


We can hope that once AN is part of Everything Nodes, that connection will be possible within one window (the EN window), so it should be possible to connect some Particle nodes to some Animation nodes, and then flow order is possible. Without this, I see no way to force particle nodes to execute before animation nodes when they are in different windows, as is the case now from what you have said. This will also apply to changing material nodes as a result of an animation, for example…


I hope AN is NEVER part of everything nodes :smiley:

Don’t get me wrong, I love Animation Nodes, but I hope everything gets rewritten from the ground up, with the majority of it in C++, so we have as much speed as possible. One of the problems with Animation Nodes is that in some situations it can be very slow, especially if you have to iterate over some million elements.

For example, if you import an Alembic file that contains a particle cache of 20 million particles, you can use AN to turn those vertices into particles, but iterating over 20 million particles, even when it’s just 20 million positions, is super slow and inefficient, when 20 million is a minimum and it should go buttery smooth.

Then trying to interpolate new particles or even vertices using AN is also super slow.

So while I want the functionality (and I hope a redesigned one that I’m sure will fix some things of AN) I hope it’s completely integrated, expanded and it’s not an AN port at all :slight_smile:


I agree with what you are saying here, I was talking about the integration of AN functions into EN, not just bunging AN as is, into EN. So we would have EN with particle nodes, animation nodes, material nodes, audio nodes, MIDI nodes, composition nodes, et al one day maybe? Oh what a nice dream that is! :joy:


By way of a “flow fix”, some time ago I placed an input socket on a node that wasn’t used, a “Dummy” if you like. The effect of this was to force this node to execute after the one connected to its “Dummy” socket. I know that sounds “hacky”, but it worked in that flow order was maintained by the connection. Doing the flow order with a “Flow” node might entail far more checking and user input and might not work well. By this I mean we could have a node that took a list of all the “primary” nodes (those without inputs) and the user sorted them into an order, so they execute from the top down. Just a thought…

Perhaps like this:

So Obj 2 node gets executed before Obj1 node?
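The “Dummy” socket trick above amounts to adding an artificial dependency edge. A tiny sketch of why it works (illustrative only, not Blender code):

```python
# A very small dependency resolver: a node runs once all of its
# predecessors have run. Node names mirror the example above.

def run_order(edges, nodes):
    """edges are (src, dst) pairs meaning src must run before dst."""
    done, order = set(), []
    while len(order) < len(nodes):
        for n in nodes:
            if n not in done and all(src in done
                                     for src, dst in edges if dst == n):
                done.add(n)
                order.append(n)
    return order

nodes = ["Obj1", "Obj2", "Output"]
edges = [("Obj1", "Output"), ("Obj2", "Output")]
print(run_order(edges, nodes))  # ties fall back to list order: Obj1 first

# Add a dummy link from Obj2 into an unused socket on Obj1:
edges.append(("Obj2", "Obj1"))
print(run_order(edges, nodes))  # now Obj2 is guaranteed to run before Obj1
```

Without the dummy link the order between Obj1 and Obj2 is arbitrary (here it falls back to list order); the extra connection makes the intended order an explicit constraint.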
