First off, I just wanted to say how amazed I am by all the development happening. I hadn’t had a chance to test this branch for a few months and I was impressed with all the changes. Awesome work @jacqueslucke!
I’m also totally on board with the idea of the nodelib; I think it opens up a lot of potential for community participation. Obviously, having a better way to organize the different files containing the node groups (potentially via the upcoming asset library) would be a huge plus, but just having a path to a central repository of custom-made node groups is already incredibly useful.
(It reminds me a bit of how Houdini does it: on startup, or via a .env file, Houdini lets you define environment variables with paths to search for Houdini Digital Assets (HDAs), mainly $HOUDINI_OTL_PATH and $HOUDINI_OTLSCAN_PATH, but it also looks in $HOME. That way you can have both per-user and studio-wide asset libraries.)
One thing I think would be a super useful addition: for the bparticles nodes that take an object input, like Mesh Emitter or Mesh Collision Event, an option to “use modifiers”; in other words, the node would apply the object’s modifiers at the current frame before evaluating its input.
As a very simple example of why this would be useful:
The image above shows a plane that has been subdivided and also has a displacement modifier applied. The idea is to later use this plane as an emitter for the particle system; however, since there is currently no way to get the result of the modifier, the particle system simply scatters new particles at the positions of the undisplaced plane.
The same would happen if you tried to use the Mesh Collision Event node: collisions would be detected in the wrong places, since the node doesn’t pick up the resulting mesh after all the modifiers have been applied.
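To make the request concrete, here is a minimal sketch of what a “use modifiers” option could do internally, using Blender’s regular Python API as a stand-in (the branch’s C++ internals would differ; the helper name is hypothetical, but `evaluated_depsgraph_get()` and `Object.evaluated_get()` are real 2.8x API calls). This only runs inside a Blender session:

```python
import bpy

def mesh_with_modifiers(obj_name):
    """Hypothetical helper: return an object's mesh data with all
    modifiers applied at the current frame."""
    depsgraph = bpy.context.evaluated_depsgraph_get()
    obj = bpy.data.objects[obj_name]
    # evaluated_get() returns the object as evaluated by the
    # dependency graph, i.e. with modifiers and animation applied.
    eval_obj = obj.evaluated_get(depsgraph)
    # A "use modifiers" option on Mesh Emitter / Mesh Collision Event
    # would read vertex positions from this evaluated mesh instead of
    # the original, undeformed one.
    return eval_obj.data
```

With something like this, the emitter in the example above would scatter particles on the displaced surface rather than the flat original plane.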
My original motivation for asking for this feature is a bit more complex, but I still think it would be incredibly useful:
In the first image there is just a simple particle system, with some particles falling down, applied to the “Particles” object.
In the second image, however, we can see that the “Deform Plane” object has a Function Deform modifier applied to it, and the node network for that modifier tries to use the result of the “Particles” object to deform the plane. In other words, the output of the particle system is used as an input to the deform node network, driving the deformation based on the particles.
This might be a use case that we don’t intend to support, or it might be better achieved in some other way that aligns with the design goals of the project. For example, everything could be kept in a single node network that runs both the particle sim and the deform operation and has multiple outputs (or, conversely, the modifiers on different objects could each “call” a different node from the network as their output).
(Also, linking modifiers in this way could introduce cycles into the graph, which I assume would be harder to detect than if everything were placed in a single network; it might be easier to simply disallow that operation, or to alert the user with an error or warning.)
However, at least for the simple case outlined earlier, I still think being able to bring a mesh into the network with all its modifiers applied would be beneficial.