2022-10-31 Geometry Nodes Post BCON22 Workshop

I’m actually working on a proposal for this. I’m hardly a seasoned coder, much less in C++, but on my last project I had… 3D simulated raindrops that had to be scattered both in time and across a surface.

I got around it by using the Alembic cache modifier and a modulo in a driver expression to loop a set of frames and retime them (yes, time compression and stretching). Once I had a looping animation, I could put a geometry node setup over it and copy the loop across the surface. But then they were all instances of one thing, all showing the same frame data at once. The loop was only 8 frames long, compressed to 4 frames, so I ended up with 12 variations of raindrop (half used the even 4 frames, half the odd 4 frames) and 4 timings for each one, from first to fourth frame, for a total of 48 simulations. And I needed a separate collection and geometry node scatter setup for every frame, because each frame needed to persist until frame*3.999999+collectionframe.
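
In case it helps anyone, here’s roughly what that retiming driver looks like as a script; a minimal sketch, assuming a cache datablock named “drop.abc” (the name and the numbers are hypothetical):

```python
import bpy

# Hypothetical setup: an Alembic cache datablock already loaded, and a
# Mesh Sequence Cache modifier already pointing at it.
cache = bpy.data.cache_files["drop.abc"]
cache.override_frame = True  # read the driven frame below, not the scene frame

# Loop 8 cached frames at double speed (8 frames compressed to 4):
# cache frame = (scene frame * 2) mod 8. Use fmod(frame * 2 + 1, 8)
# for the odd-frame variants.
fcurve = cache.driver_add("frame")
fcurve.driver.type = 'SCRIPTED'
fcurve.driver.expression = "fmod(frame * 2, 8)"  # 'frame' is built into the driver namespace
```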

Yeah, it’s really cumbersome and slow, and some modifier attributes couldn’t be accessed in bulk. It needs automation.

Ah, I see, I didn’t consider that- but it still mostly falls within my understanding of the simulation being a separate datablock, and separately addressable/reusable as a thing.

I also noticed in some recent testing that some kinds of input boxes in modifiers don’t respond well to mass Alt-inputs. Is there a legitimate reason this happens? Geometry nodes is all about making things procedural and more automated, yet in Alembic cache inputs you cannot use Alt to link multiple objects to one cache file at once. This would be an issue if, say, you had dozens of explosions in a shot that were all the same one, using geometry nodes to rotate and vary them.
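
Until Alt multi-editing works on those inputs, a script can do the bulk assignment; a minimal sketch, assuming a loaded cache named “explosion.abc” and a hypothetical object path inside it:

```python
import bpy

cache = bpy.data.cache_files["explosion.abc"]  # hypothetical cache datablock

# Point every selected object's Mesh Sequence Cache modifier at the same cache.
for obj in bpy.context.selected_objects:
    mod = obj.modifiers.get("MeshSequenceCache")
    if mod is None:
        mod = obj.modifiers.new("MeshSequenceCache", 'MESH_SEQUENCE_CACHE')
    mod.cache_file = cache
    mod.object_path = "/explosion/explosionShape"  # hypothetical path inside the .abc
```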

But the bake button or filepath to bake to are separate from that.

Now this opens up a bit of a can of worms- if you have multiple simulations in order, do they all appear, or are they simulated as evaluated? While I do like this, there’s the issue that simulations have super complex sets of inputs, and I can scarcely imagine what it would be like to have to input all the Mantaflow options through the modifier viewer.

I believe modifier/sim stuff is getting quite complicated and answers will be hard to come by, in light of simulations currently being a separate menu for what is basically a really option-heavy modifier, as I understand it.

I think a “field viewer” would be useful. It can often be quite difficult to debug fields; at the same time, it’s something that a preset node group would be ideal for.

This could be improved.

I don’t know what “simulated as evaluated” means. But one cache/bake should be able to contain multiple simulations, with just one bake button and one file path. They do not have to appear separately in the modifier.

Organizing inputs into panels is being worked on in the context of the Principled BSDF, see the topic on that. If in the future the system to create that UI is powerful enough then it’s not really different than existing physics settings organized in panels.

Though part of the reason there are so many settings is because that’s the only way to add more advanced control. I imagine typical physics node group assets would have far fewer settings, either as a building block to extend with more nodes, or to handle more specific effects.

Another solution to the too-many-settings problem would maybe be to split it into different nodes. For example, have a node for each panel of the Mantaflow setup: SimulationDomain, LiquidSolver, CacheOutput… I think Embergen has quite a nice solution. This would also make the system more modular.

But maybe it is already planned. I just think it would be crazy to fit everything into one node.

On the other hand, it wouldn’t be that user-friendly anymore, because you’d have to know which nodes to search for and connect. Maybe we could have both: modular nodes which are accessible in the node editor, and one Fluid/Smoke Sim node with all the settings.

I very much think that is not the way to go- it would be quite confusing if there were multiple nodes that had no intended use outside being specific inputs for other nodes- that’s just a single node with more to get wrong. As well, Mantaflow is a point of contention- it’s getting buggy and less supported now that it’s no longer in development, and is rife with artifacts- it’ll need to be replaced soon, I think, poor thing.

Nodes can be expanded and collapsed in the UI, so the actual space one takes up is no more of an issue than the Principled BSDF node in shaders. I was mostly referring to how simulations have long had their own menu section that generated a modifier, which meant the simulate button and the options were on the same page of the properties panel.

Ah, I simply meant that the nodes have an “order of operations” or “evaluation order”. Say you wanted a smoke cloud to drop ash particles: you would need to simulate the smoke cloud first, then use the volume info to spawn ash particles. That part would of course be easy in geometry nodes-
but what if the volume simulation is INCREDIBLY complex and long (a day to simulate), and the falling ash is a simple one (a couple of minutes to simulate)? It wouldn’t make sense to re-evaluate the smoke simulation when its inputs haven’t changed, just because a second simulation (the falling particles) needs to be iterated on. Even if one cache can contain multiple simulations, it’s rarely viable as a workflow to re-run every simulation in a group each time one of them needs work.
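
To make that concrete, here’s a conceptual sketch in plain Python (not Blender API; all names are hypothetical) of the caching behavior I’d want: each simulation’s bake is keyed by a hash of its inputs, so the expensive upstream sim is reused while the cheap downstream one is iterated on.

```python
import hashlib
import pickle

_bakes = {}

def cached_sim(sim_fn, inputs):
    """Run sim_fn only when its inputs changed; otherwise reuse the bake."""
    key = (sim_fn.__name__, hashlib.sha256(pickle.dumps(inputs)).hexdigest())
    if key not in _bakes:
        _bakes[key] = sim_fn(inputs)  # expensive: a day for the smoke volume
    return _bakes[key]                # cheap: reuse the existing bake

# smoke = cached_sim(simulate_smoke, smoke_inputs)     # unchanged -> reused
# ash   = cached_sim(simulate_ash, (smoke, ash_seed))  # iterate freely
```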

I understand there’ll probably be an option in the modifier AND the node setup to run the simulation, but it may also be prudent to include an option to “lock” a simulation in a node setup, so that pressing the button in the modifier doesn’t dump or re-simulate good or important data.

Ah! I think I heard about that- wasn’t it mentioned at Bcon?
I’m not unhappy about the number of options in current simulations- they’re extensive and useful- but I would be unhappy if simulation nodes didn’t encapsulate at least as much control. We’ll be getting more control in some ways for sure, such as being able to adjust most inputs on a per-frame or even per-data basis (I’m looking forward to making a “shatter” sim that turns off breaking under a certain mass/force limit, etc.).

However, simulations come in all shapes and sizes- many have features that simply won’t have a use as a node. Having a simulation be a black box with a control panel allows for more intuitive control and iteration. It’s also important to consider what would simply be exhausting to have to create with nodes- things like border collisions for fluids, or fire and fuel usage for pyro. I find most of what’s currently in the physics options to be necessary, and not generally reusable for much else, but I also recognize that they could all be extended by allowing users to fiddle with values and data between simulation steps, such as adding brownian motion. (Particles feel like the most freeform sim; the others, like cloth, fluid, and soft body, don’t seem to gain anywhere near as much from having internal nodes, if you consider the uses of force fields.)
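
As an illustration of “fiddling with data between simulation steps”, here’s a conceptual sketch in plain Python (a hypothetical per-step hook, not an existing API) of injecting brownian motion into particle positions:

```python
import random

def brownian_step(positions, strength=0.01):
    """Add a small gaussian random offset to each particle position."""
    return [
        (x + random.gauss(0.0, strength),
         y + random.gauss(0.0, strength),
         z + random.gauss(0.0, strength))
        for (x, y, z) in positions
    ]

# Hypothetical use inside a simulation loop, between solver steps:
# positions = brownian_step(positions, strength=0.02)
```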

I agree with you, but I think at some point we’ll need this. For example, when you have a fluid sim (Mantaflow or any upcoming solver) and want to add two emitters, how would you plug them into the fluid node? Maybe join them and put them into an “emitter” socket? But what do you do when you want each emitter to emit a different color of fluid, with a different intensity, for example? You would have to make the menu add settings for each emitter, which would get pretty messy. I think splitting the settings for the emitters into a separate “Fluid Emitter” node would be clearer. Interaction between different physics systems would also be messy with this method of putting everything into one node.

For Mantaflow specifically, an emitter node seems like a good one for now… but it’s still super suboptimal.

This is definitely the question here- and once again it’s not easily solved inside a simulation frame either. To me, the best solution I can immediately come up with is heavily dependent on the actual simulation.

For example- in Mantaflow, the smoke color is input through the emitters (and is keyframeable!), and output as an attribute (color) of the domain object. I assume the color of each cell is evaluated each frame, with cell colors mixing based on the percentage of current color and incoming density. This suggests that it’s best to bundle such options with the emitter objects somehow.
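
For reference, that keyframeable emitter color can already be scripted; a minimal sketch, assuming an emitter object named “Emitter” whose Fluid modifier is set to Flow:

```python
import bpy

flow = bpy.data.objects["Emitter"].modifiers["Fluid"].flow_settings

flow.smoke_color = (1.0, 0.2, 0.0)            # orange smoke at frame 1
flow.keyframe_insert("smoke_color", frame=1)

flow.smoke_color = (0.1, 0.1, 0.9)            # blue smoke at frame 60
flow.keyframe_insert("smoke_color", frame=60)
```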

There’s a reason just about every “uber” physics system ever set up is a single black-box simulator. So many things, like air resistance splitting large water drops, fuel being used up to make fire, updrafts affecting cloth… All things considered, it’s largely impossible to manually set up every possible interaction type. But if it’s all one sim (for example, everything represented by particles in the back-end), it’s rather trivial to have the same base data and physics affect things correctly, if a bit inefficiently, because you only need to set up material types and their interactions with objects.

I think interactions will be just fine regardless. There’s functionally no difference between feeding two simulations a collection of the same meshes, or turning those collections into emitters and feeding those into two simulations.
It’ll probably be far more common to feed the output of one into another, such as using the surface of spilled soup, and merging that with the soup solids to emit steam from.

In my opinion, a bigger question is how to ease the fact that currently you simply set an object to have physics properties and it can be part of any applicable simulation without further work, whereas a simulation in geometry nodes needs objects both passed in and their physics options manually set per geometry node group.

What should the emitter workflow be?

  1. attributes or properties added to objects in nodes (without changing the base object)
  2. a kind of object converted from another object in nodes
  3. a property that may be added to an object via the property editor (like now)
  4. totally contained within a simulation, with the simulator taking mesh data as inputs
  5. a hybrid approach, emitter properties are added per object, but can be accessed in nodes
  6. hybrid approach 2- by default, nodes uses global emitter properties unless overridden
  7. superhybrid approach- the result of nodes can produce an object with emitter properties/attributes
    (number 7 can be added to any other approach, too)

Hey, I have a little suggestion for editing geometry: why not use a node like this?


And basically it has:

  • 1 output (Geometry)
  • 1 input (Geometry)
  • 1 button with 2 functions (Edit and Validate)
  • 1 selection bar

Basically it works like this: you select the data type (edge, vertex, or face), and if you hold Shift you can select more than one- for example Vertex + Edges, or Vertex + Faces… or vertex only, edge only, face only, or simply all 3 options!

When the selection is OK, you just click on the Edit button, and at that moment all the data is “realized”, as if the modifier were applied- but it is not applied! It just caches the data at this point!

Basically you have the base data and the cached data, and when you connect this node, the cached data is updated by the node, replacing the point (XYZ) data like an “instance”.

There is no limit to how many of these nodes you can add. Basically the compute works like this:

Mine could be better because you don’t need to click on bake, or select a mesh, or anything… all the nodes before the Edit Transform are “compiled” and unified into one geometry that you can edit, and the nodes after are not taken into account- but if you add one more Edit Transform, everything before it is re-compiled for editing!

Fewer nodes, same result! Better view and better integration!

We’re just looking forward to the development of the simulation branch as an early version of the modifier’s way of caching execution data.
These nodes are of the same category, just with manual input.

It’s not specifically for the simulation branch- it’s for everything! And don’t forget that in the edit geometry node you can move some points, but you can also make extrusions and edit all the topology, because it is an EDIT! It just replaces the original data with a cached rewrite at a different state.

I’m LOVING the simulation nodes branch!

After playing around with the branch from the 24th of Nov 2022 with the “Use Cache” tick box on the Simulation Output Node, it’s clear how essential caching and baking data from a simulation loop is. I think a dedicated set of baking / animation data output/input nodes would make geometry nodes really powerful. Here are my thoughts:

Cache Node

  • Records geometry data each frame / simulation step locally in the .blend file. Each cache can be named, similar to the Store Named Attribute Node, and recalled at other points from an “Input Cache Node”. This would also allow you to use cached data in other node groups.

Bake to Animation Node

  • Realizes geometry to a new object (similar to applying a Geometry Node modifier), with cached simulation data converted to the animation system, keyframed on the timeline. My thinking here is that the original object with its attached GeoNode group would be kept, but placed in a new collection and that collection deactivated. The node would have a “Bake” button and a “Bake to Frame” / “Bake Sub-step to Frame” drop-down. As long as the vertex IDs are intact you could then blend simulation outputs together in the NLA!!

Output Cache Node

  • Outputs cached simulation data to an external file; works similar to the File Output Node in the compositor, with ‘select file path’ and ‘file type’ drop-downs, as well as assignable inputs for attributes. Ideally this would also support conversion to common file types for use in other 3D packages (see the sketch after this list for a rough present-day equivalent).

Input Cache Node

  • Allows you to reference cached data within the project by calling its stored name, as well as allowing you to input data from an external file using its file path. Would be cool to have a ‘run’ input, ‘steps’ and ‘sub-steps’ outputs, and assignable outputs for attributes cached along with the geo data.
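
For the Output Cache Node idea above, the closest present-day equivalent I know of is the Alembic exporter, which writes evaluated geometry per frame; a hedged sketch (the file path and frame range are hypothetical):

```python
import bpy

# Export the evaluated geometry of the selected objects to an Alembic file,
# for reuse in this project or in other 3D packages.
bpy.ops.wm.alembic_export(
    filepath="//sim_cache.abc",  # hypothetical output path
    start=1,
    end=250,
    selected=True,
)
```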

I’m sure you’ve probably got strong ideas about how you might implement caching from Simulation Nodes, but wanted to share this idea because it got me really excited.

Also, inputting animation data (like mocap data) directly into geometry nodes through something like an “Input Cache Node” would be really cool!

Thank you for your amazing work!!

Hank

Could node groups implement a way to fold parameters into collapsible groups?