Particle Nodes UI

So I’m very impressed with everything developed so far. Quick side note, @jacqueslucke your contribution to the motion designer community is legendary.

Anyway, based on my understanding of the code in Animation Nodes, the “Script” node was probably one of the more complex to design, because you had to make sure that people couldn’t accidentally break Blender. But how far off are we from having a script node here? Even if it can’t handle complex data types like events, etc., it would be so nice to write my own expressions/formulas rather than building enormous groups for complex math/vector math. I can imagine that execution may not work the same because of the Uber Solver, but is it likely to be implemented earlier or later in this development process? (Also, if it already exists, please point me in the right direction.)

Or is the python API able to interact with Particle Nodes in some other way?
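To illustrate the “can’t accidentally break Blender” concern, here is a minimal sketch of how a formula could be evaluated in a restricted namespace. This is not part of the Particle Nodes or Animation Nodes API; the function, variable names, and the formula are purely hypothetical, and `eval` with stripped builtins is only a rough illustration, not a complete sandbox.

```python
import math

def evaluate_expression(expression, variables):
    """Evaluate a user-supplied formula in a restricted namespace.

    Hypothetical example only: 'variables' stands in for per-particle
    values such as age or velocity that a script/expression node might expose.
    """
    allowed = {name: getattr(math, name) for name in ("sin", "cos", "sqrt", "pi")}
    allowed.update(variables)
    # Block access to builtins so the expression cannot import modules or
    # open files. (Not a watertight sandbox, just the basic idea.)
    return eval(expression, {"__builtins__": {}}, allowed)

# Example: a small falloff formula instead of a large math node group.
print(evaluate_expression("sin(age * pi) * strength", {"age": 0.25, "strength": 2.0}))
```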

2 Likes

Thank you for the examples. For some reason your file is not working with the current branch anymore (v2.83 from 2. Feb.). Did it change that much?

1 Like

I will have a look tomorrow…

EDIT: All four files work fine with the build from 2. Feb
All necessary groups should be embedded in the files.
https://drive.google.com/drive/folders/1WD6WSeYaU6a9TU8qV8D9gfLZNv0C3iwU?usp=sharing

Could the reason be that I’m using the official branch from blender.org? The nodes you’re using don’t exist there. Are you using the build from https://blender.community/c/graphicall/ ?

First off, I just wanted to say how amazed I am by all the development happening. I hadn’t had a chance to test this branch for a few months and I was impressed with all the changes. Awesome work @jacqueslucke !
Also, I’m totally on board with the idea of the nodelib; I think it opens up a lot of potential for community participation. Obviously, having a better way to organize the different files containing the node groups, potentially via the upcoming asset library, would be a huge plus, but just having a path to a central repository of custom-made node groups is already incredibly useful.
(It reminds me a bit of how Houdini does it: on startup, or via a .env file, Houdini lets you define a few environment variables for paths to look for Houdini Digital Assets (HDAs), mainly $HOUDINI_OTL_PATH and $HOUDINI_OTLSCAN_PATH, but it also looks in $HOME. That way you can have both per-user and studio-wide asset libraries.)
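Just to make the analogy concrete, here is a small sketch of what a per-user plus studio-wide search path for a nodelib could look like. The environment variable name (`BLENDER_NODELIB_PATH`), the `~/blender_nodelib` fallback, and the idea of scanning for .blend files are all assumptions for illustration; nothing like this exists in Blender today.

```python
import os
from pathlib import Path

# Hypothetical variable, analogous to $HOUDINI_OTL_PATH.
NODELIB_ENV_VAR = "BLENDER_NODELIB_PATH"

def nodelib_search_paths():
    """Collect node-group library directories from an env var plus a per-user folder."""
    paths = []
    for entry in os.environ.get(NODELIB_ENV_VAR, "").split(os.pathsep):
        if entry:
            paths.append(Path(entry))
    # Fall back to a per-user location, similar to Houdini also checking $HOME.
    paths.append(Path.home() / "blender_nodelib")
    return [p for p in paths if p.is_dir()]

def find_node_group_files():
    """Return every .blend file found in the configured library paths."""
    return [f for root in nodelib_search_paths() for f in sorted(root.glob("*.blend"))]
```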

One thing that I think would be a super useful addition, for the bparticles nodes that take an object input (like Mesh Emitter or Mesh Collision Event), is an option to “use modifiers”; in other words, the node would apply the modifiers at the current frame before evaluating its input.

As a very simple example of why this would be useful:
[image: displaced_emitter]

The image above shows a plane that has been subdivided and has a displacement modifier applied. The idea is that this plane would later be used as an emitter for the particle system; however, since right now there’s no way to get the result of the modifier, the particle system simply scatters new particles at the positions of the undisplaced plane.
The same would happen if you tried to use the Mesh Collision Event node: the collisions would be detected in the wrong places, since it’s not picking up the resulting mesh after applying all the modifiers.
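For reference, this is roughly what a “use modifiers” option would have to do internally: fetch the evaluated (modifier-applied) version of the object from the dependency graph. The snippet below uses the standard Blender Python API for that; the object name is just an example.

```python
import bpy

# Get the mesh with all modifiers (e.g. the displacement) applied at the
# current frame, which is what a "use modifiers" option on Mesh Emitter
# or Mesh Collision Event would effectively need.
depsgraph = bpy.context.evaluated_depsgraph_get()
emitter = bpy.data.objects["DisplacedPlane"]      # example object name
emitter_eval = emitter.evaluated_get(depsgraph)   # evaluated copy of the object
mesh_eval = emitter_eval.to_mesh()                # modifier-applied mesh

print("evaluated vertex count:", len(mesh_eval.vertices))

# Free the temporary mesh owned by the evaluated object when done.
emitter_eval.to_mesh_clear()
```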

My original motivation for asking for this feature is a bit more complex, but I still think it would be incredibly useful:


In the first image there’s just a simple particle system with some particles falling down, applied to the “Particles” object.
In the second image, however, we can see that the “Deform Plane” object has a Function Deform modifier applied to it, and the node network for that modifier is trying to use the result of the “Particles” object to deform the plane; in other words, it uses the output of the particle system as an input to the deform node network in order to drive the deformation based on the particles.

This might be a use case that we don’t intend to support, or it could be better achieved in some other way that aligns more closely with the design goals of the project. For example, everything could be kept in a single node network that runs both the particle sim and the deform operation and has multiple outputs (or, conversely, the modifiers on different objects could each “call” a different node from the network as their output).
Also, linking modifiers in this way could introduce cycles into the graph, which I assume would be harder to detect than if everything were placed in a single network; it might be easier to simply not allow that operation, or to alert the user with an error or warning.
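On the cycle point, here is a small sketch of the kind of check involved. It is not Blender’s actual dependency-graph code, just the classic three-color depth-first search over a made-up dictionary of “who reads from whom”.

```python
def has_cycle(dependencies):
    """Detect a cycle in a dependency graph given as {node: [nodes it reads from]}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in dependencies}

    def visit(node):
        color[node] = GRAY
        for dep in dependencies.get(node, []):
            if color.get(dep, WHITE) == GRAY:      # back edge -> cycle found
                return True
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in dependencies)

# "Deform Plane" reads the particle result, while the particle system in turn
# reads the deformed plane: a cycle the UI could refuse or warn about.
deps = {"Deform Plane": ["Particles"], "Particles": ["Deform Plane"]}
print(has_cycle(deps))  # True
```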

However, I still think at least for the simple case outlined earlier, having the functionality of bringing a mesh into the network with all the modifiers applied would be beneficial.

3 Likes

Hi, I posted a tiny rant about node-UI design considerations and how implicit layout constraints arise from node in/out placement here:

It’s not exclusive to the particle nodes UI, but especially since there has been some struggle adapting the existing node interface to meet the needs of the particle system here, it seems worthwhile to re-evaluate some fundamental design conceits (i.e. that node graphs are always laid out left-to-right) and common causes of graph messiness (i.e. the switchback/shoelace effect :yarn:)

1 Like

check this out https://code.blender.org/2020/03/feb-2020-ui-workshop/

2 Likes

That’s awesome.

Based on the way the connector icon sits on the border of the node in this image, I wonder if you could just keep unconnected pins in their default position but have connected ones move to the border position with the shortest path to the pin on the connected node.

It seems like the sort of behavior that could be grouped among options to have curved or straight noodles.

Perhaps there could be an inner border for inputs and an outer border for outputs, to help quickly distinguish between inputs and outputs when looking at the node itself.
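To spell out the “shortest path to the connected pin” idea, here is a tiny sketch that picks the border point of a node rectangle closest to the pin it is connected to. The node geometry, candidate sampling, and function names are all simplifications made up for illustration, not anything from Blender’s UI code.

```python
from math import hypot

def border_anchor_points(x, y, width, height, samples_per_edge=8):
    """Candidate socket positions sampled along a node's rectangular border."""
    points = []
    for i in range(samples_per_edge + 1):
        t = i / samples_per_edge
        points += [
            (x + t * width, y),              # bottom edge
            (x + t * width, y + height),     # top edge
            (x, y + t * height),             # left edge
            (x + width, y + t * height),     # right edge
        ]
    return points

def best_anchor(node_rect, target_pin):
    """Pick the border point closest to the pin it is connected to."""
    tx, ty = target_pin
    return min(border_anchor_points(*node_rect),
               key=lambda p: hypot(p[0] - tx, p[1] - ty))

# A node at (0, 0) sized 140x80, connected to a pin up and to the right:
print(best_anchor((0, 0, 140, 80), (300, 200)))  # lands on the top-right corner
```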

2 Likes

I’m not sure if I agree with having the “condition gate” node flow from right to left. Is it possible to maintain a single flow direction paradigm instead of a multi flow direction paradigm? I understand that similar to classic programming flow charts, condition gates need to flow/loop back to previous nodes in the chain, but they still maintain a singular flow direction at the connection inputs and outputs. I also understand that a single flow paradigm probably clutters up space on one side, but in terms of visualizing control flow in your mind, it’s easier to visualize the logic in one flow direction. Regardless of which flow paradigm is used, I think the connection icon indicating flow direction is a good idea.

Regarding the list nodes, I’ll just paste my comment from the blog post:

For the list nodes, maybe take some inspiration from the classic layering system in other graphics apps? Lists are easier to manage in a layer system. Since they are top to bottom you can just stack them in one window. Each ‘layer’ will have a corresponding node connection. I think this design would also be consistent with a future texture/painting layering system in a node system since that is also top to bottom (or bottom to top).

3 Likes

I had the same view, but when I tested the branch it was actually pretty seamless. The only problem I had was that I needed a lot of nodes for a simple effect, but with the group nodes, and after seeing the future UI design, I think it will be almost perfect.

2 Likes

I was thinking about the navigation problem within a large node tree.
How feasible would it be to make a mini-map of the node tree, like the one you get in video games?

Maybe when you hover over a socket, the map area could even zoom in on the node connected to it, to show a quick preview of what is connected to that socket and what settings it has? And a shortcut key to center on the node that is currently previewed?
I think it could be very helpful; the only downside is screen space…

2 Likes

Well, I suppose from the user facing/UI perspective instead of execution connection being from right to left as indicative of execution order, the connection could be ‘requesting execution readiness’ thus being from left to right similar to data flow. But I’m not sure it’s going to be any less confusing, could in fact end up being more confusing, perhaps. Not sure…

1 Like

Well, in the programming world there is the singly linked list, which also uses the term “nodes” to describe its data structure and flows in one direction. Actually, you can base an entire programming language on linked lists; Lisp, for example. I find Lisp one of the easiest programming languages to learn, mostly because of this. I think node designers should consider the design philosophy of Lisp in their discussion.
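For anyone unfamiliar with the analogy, here is a minimal sketch of the idea: a singly linked list built from Lisp-style cons cells, where each node is just a value plus the rest of the list and the whole structure flows in one direction. Python is used as a stand-in for Lisp here, purely for illustration.

```python
# Each cell is (value, rest-of-list); None marks the end of the list.
def cons(value, rest=None):
    return (value, rest)

def iterate(cell):
    """Walk the list in its single flow direction, yielding each value."""
    while cell is not None:
        value, cell = cell
        yield value

# (1 2 3) as nested cells, like Lisp's (cons 1 (cons 2 (cons 3 nil)))
numbers = cons(1, cons(2, cons(3)))
print(list(iterate(numbers)))  # [1, 2, 3]
```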

It would be great if you could duplicate a node as an instance (similar to duplicating a geometry as an instance). This would mean you could place instance nodes anywhere in the node tree without any wires connected to them. They’d just act as a means to place the controls you want to tweak all in one location, and without the clutter of the wires.

Their sole purpose would be to drive their counterpart node without you having to navigate through the tree to find it.

An additional benefit would be that you could instance a node and then place the instance closer to the node you want to plug it into. This would mean you wouldn’t need wires crossing huge distances, which can make the node graph difficult to read because of noodle overload. The instances should retain the output socket, but the input socket should be removed, because nodes can’t accept multiple wires into one socket.

I made a proposal for the compositor, but thought this would also be a good place to share as it’s node related:

Vote it up if you like the idea.

2 Likes

That’s what node groups do; surely their interface will be improved to integrate widgets such as a color wheel, enums, etc.

But how would you do this with a node group? Particularly if the nodes you want to control aren’t all in the same place in the tree, nested amongst various other group nodes etc.

I guess the same as we do now with float, color, or vector sockets? From the node group interface in the node editor’s N region. I don’t know how much work integrating all those widgets represents.

I was just saying that what you describe is pretty much the behaviour of node groups, and I have observed that overlaps in functionality tend to be resolved by improving existing solutions rather than adding new ones. I may be wrong; in any case, I think that’s a good idea.

Why don’t you just create a curve in sculpt mode and use it for every brush? It gives the same path.

JacquesLucke, will the “Function” branch eventually contain the whole set of Animation Nodes capabilities, or is it a different direction of evolution?

Has anyone else noticed that these node systems (AN, PN, etc.) in Blender have node management issues?
As a Houdini user, it’s very important for me to be able to build my own digital assets. I don’t see the possibility of saving an open node tree here, and as far as I can tell, it is not possible to copy nodes from one project to another. To make matters worse, the setup is more complicated than in Houdini.

That means you have to redo the setup every time, or am I wrong?
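For what it’s worth, node groups can already be appended or linked from another .blend file, either via File > Append or from Python. Here is a minimal sketch using the standard Python API; the library path is just a placeholder.

```python
import bpy

# Reuse node groups from another project by loading them from a library .blend.
# The path is a placeholder; link=True would link instead of append.
library_path = "/path/to/my_node_library.blend"

with bpy.data.libraries.load(library_path, link=False) as (data_from, data_to):
    # Request every node group found in the library file.
    data_to.node_groups = list(data_from.node_groups)

# After the 'with' block, data_to.node_groups holds the appended datablocks.
for group in data_to.node_groups:
    print("appended:", group.name)
```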