Simulation Nodes

I’ve been using Simulation Nodes to try to simulate terrain erosion, with the eventual hope of having a system that works on arbitrary (non-grid) topologies.

My first attempt is an ultra-approximate sim of dirt/sand tumbling down a slope and settling at the bottom. The approach I want to use is to iterate over each vertex, compute how much is lost as runoff this frame (and maybe in what direction); compute how much is gained from neighboring verts’ previous frame runoff; then change the vert’s height accordingly.
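In plain-Python terms, the per-vertex update described above might look something like this. This is my own hypothetical sketch of the idea, not an actual node setup; `neighbors[i]` is assumed to list the vertex indices connected to vertex `i`:

```python
# Hypothetical sketch of the per-vertex erosion step: each frame, every vertex
# sheds some material downhill as "runoff", and collects the runoff its higher
# neighbors emitted on the previous frame.

def erosion_step(heights, neighbors, prev_runoff, rate=0.1):
    n = len(heights)
    runoff = [0.0] * n   # material leaving each vert this frame
    gained = [0.0] * n   # material arriving from neighbors' previous runoff
    for i in range(n):
        lower = [j for j in neighbors[i] if heights[j] < heights[i]]
        if lower:
            # lose material proportional to the steepest local drop
            runoff[i] = rate * (heights[i] - min(heights[j] for j in lower))
    # distribute each vert's *previous* frame runoff to its lower neighbors
    # (runoff can't enter from below)
    for i in range(n):
        lower = [j for j in neighbors[i] if heights[j] < heights[i]]
        if lower and prev_runoff[i] > 0.0:
            share = prev_runoff[i] / len(lower)
            for j in lower:
                gained[j] += share
    new_heights = [heights[i] - runoff[i] + gained[i] for i in range(n)]
    return new_heights, runoff
```

Calling this once per frame, feeding each frame's `runoff` back in as the next frame's `prev_runoff`, mirrors the "read previous frame's runoff from neighbors" structure the simulation zone would need to express.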

The problem I’ve hit is that there seems to be no way to read data from multiple verts connected to the current vert. If I could do something like “Select All Connected”/“Expand Selection,” I could mask out the unimportant points (runoff can’t enter from below :P), then use Attribute Statistic to get a sum of the above verts’ runoff data and (I think) get the result I need…

In the current toolkit, the closest thing I’ve been able to find involves weighting the “Edges of Vertex” node by whether the edge’s end vert is above the start vert, then, if it’s not, writing all runoff to that ONE end vert. But that’s a stupidly complicated setup and I haven’t been able to make it work right ¯\_(ツ)_/¯
The second-closest thing is using Blur Attribute. This is a pretty terrible workaround because it doesn’t actually do what I need; it’s just the only node I know of that (presumably) reads data from all connected verts. Not much of a match, but it seems to behave better than the edges-of-vertex method.

tl;dr a way to read data from all connected verts (with a selection input, preferably) would be lovely.

EDIT: Delta Time? As a game programmer, I can see that being very handy in some cases. This is perhaps not one of them, but it’s good to have the option.

2 Likes

This setup by @modmoderVAAAA samples the index of all neighboring vertices of a vert based off the index input at the start.
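As a plain-Python analogue (my own hypothetical illustration, not the actual node setup), sampling neighbors by an index input amounts to building a per-vertex adjacency list from the mesh’s edges and indexing into it:

```python
# Build a neighbor list per vertex from an edge list, then pick the n-th
# neighbor of a vertex by varying an index input -- roughly what the
# Edges-of-Vertex-based setup does per sort index.

def build_neighbors(num_verts, edges):
    neighbors = [[] for _ in range(num_verts)]
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    return neighbors

def nth_neighbor(neighbors, vert, n):
    """Return the n-th neighbor of `vert`, or None if n is out of range."""
    adj = neighbors[vert]
    return adj[n] if n < len(adj) else None
```

For example, on a square (edges 0–1, 1–2, 2–3, 3–0), vertex 0’s neighbors come out as `[1, 3]`, and changing `n` walks through them one at a time.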

2 Likes

Maybe I misunderstood you, but have you thought about using something like wet paint in simulation nodes? You can accumulate sliding points by adding their proximity to a stored attribute inside the simulation zone. I have also been experimenting with this recently:

6 Likes

No showstoppers for me. Overall a very pleasing experience.

As others mentioned, the computer sometimes stutters when simulating over 300–400 frames; not always, but sometimes even with a lightweight simulation.

I don’t fully understand how to use delta time, but I multiply forces by delta time and then add them to velocity. Is this the right way?

I do have motion blur issues. Maybe they’re related to delta time, or to particles moving too fast?

2 Likes

Problem with changing domain size. This seems to have been resolved with the ID attribute on the point cloud.

Oh, great. Such a frustration: you expect nice smooth motion, and instead you get a messy blur soup.

edit: yes this is resolved by putting “set id” in the simulation zone, thanks!

showstoppers

I was hoping to do some cheap and easy jiggle physics, but it seems a simulation zone kills any movement from an armature modifier that comes before it. Even a simulation zone that does nothing freezes the mesh. I guess this is not implemented yet? Or am I missing something?

A workaround would be parenting empties to the needed bones and using those to deform, though this assumes the jiggly bits are separate. In practice, they can be separated, then welded back with another GN modifier that welds just those specific vertices (so no headaches with normals), but that gets even more tedious. Jiggle physics driven purely by object movement is perfectly doable, though.

The usage of delta time depends on the simulation method you’re using. For verlet integration, it’s written into the base equation.
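As a sketch of the distinction (my own illustration, not a specific Blender node setup): “multiply forces by delta time, add to velocity” is semi-implicit Euler, while position Verlet folds `dt` into the base equation and keeps velocity implicit:

```python
# Two common ways delta time enters an integrator.

def euler_step(pos, vel, accel, dt):
    """Semi-implicit Euler: scale acceleration by dt, then advance position."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def verlet_step(pos, prev_pos, accel, dt):
    """Position Verlet: velocity is implicit in (pos - prev_pos)."""
    new_pos = 2.0 * pos - prev_pos + accel * dt * dt
    return new_pos, pos  # current pos becomes next step's prev_pos
```

Either way, using the real frame delta time (rather than a hard-coded step) is what keeps the motion consistent when the scene FPS changes.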

Hmm, will have to look into that. Mine is probably a code collage then. Thanks!

Yes, I know about that method. What I’m trying to say is that my current solution is not viable at the moment, and I would like to see it become viable.

(though with what @Alex_R_Heskett shared, I may just be straight-up wrong about it being non-viable…)

1 Like

Wait, Edges of Vertex actually does that? The docs said it only outputs the index of ONE edge… Now I’m really confused!

(although, I think that “equal to index” setup may be simpler than the one I made :slight_smile:)

That is correct: it outputs the index of one edge. With my setup you can change the index input to select each neighboring vert.

Regarding this, I recently encountered the same problem (limitation?) while trying to create a simulation of a solar system. The best I managed to do was to create celestial bodies that were attracted by the gravity of a star, meaning:

Point A1 attracted towards point X with a force F1
Point A2 attracted towards point X with a force F2 (and so on)

So, I couldn’t get a point to be attracted to all the other points.

But I found this video that explains very well how to compute and store the attraction forces between all the points themselves.

With this, in any set of points, ALL the points are attracted to ALL the other points. Following the video, and after many trial-and-error tests, I managed to make something that “kind of worked”.

File >> Galaxy Simulation.blend - Google Drive

Since it’s slow you’ll need to bake it :pie:

I guess it could be optimized by using a radius for each point to only calculate the attraction force from the closest points :thinking:
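For reference, the all-pairs idea boils down to something like the following plain-Python sketch (my own illustration of the concept, not the linked node setup or the video’s exact method):

```python
# Minimal all-pairs gravity: every point attracts every other point.
# O(n^2); a distance cutoff or spatial partitioning would cut the cost.

def gravity_forces(positions, masses, g=1.0, eps=1e-3):
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            d2 = sum(c * c for c in dx) + eps * eps  # softening avoids 1/0
            f = g * masses[i] * masses[j] / d2       # force magnitude
            inv_d = d2 ** -0.5
            for k in range(3):
                forces[i][k] += f * dx[k] * inv_d    # along the unit direction
    return forces
```

The `eps` softening term is a standard trick to keep the force finite when two points nearly coincide; Newton’s third law falls out automatically, since each (i, j) pair produces equal and opposite contributions.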

I used it for a while. But then I started to ignore it because

(A) if a simulation is very simple, it already runs smoothly and I don’t need it
(B) if the simulation starts to get slow and complex, I’d rather bake a range of frames (I don’t mind waiting)

4 Likes

My five cents of feedback and complaints.

Overall, the Simulations look outstanding and work fast and smoothly!
However, from an artistic perspective, it lacks high-level building blocks. For example, in many other DCCs, building a particle system from nodes means manipulating velocities, positions and other attributes based on sim time, particle age or other attributes. Blender Geometry Nodes are a bit more low-level in that regard. So to recreate what’s possible with the old particle system, users must set up their own high-level node groups or use someone else’s setup, which adds a bit more hassle when it comes to relatively simple setups.

What I’m trying to say is that it would be super handy if future versions of Blender shipped with pre-built high-level nodes for particle systems and simple simulations as standard assets in the Asset Library, similar to what was done for Hair.

Also, a few extra nodes for collisions and for finding neighbouring points more generically are highly anticipated! =)

4 Likes

I think you’re just missing something.

From my understanding this is how simulation zones work:

  • On Frame 1, the simulation zone takes an input, processes it, then outputs the result.
  • On Frame 2, the simulation zone takes the output from frame 1, processes it, then outputs the result.
  • On Frame 3, the simulation zone takes the output from frame 2, processes it, then outputs the result.
  • etc.

The issue is that after frame one, the simulation zone’s only input is the output of the simulation zone from the previous frame.

This is the expected behavior.

This means if you pass your character through the simulation zone, then the simulation zone will take the mesh input and pose from Frame 1, then disregard any animations done on subsequent frames. This is probably the issue you’re dealing with.
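The feedback loop described above can be modeled as a toy sketch (my own illustration of the behavior, not Blender internals): only the first frame reads from outside the zone, so later edits to that outside input never reach it.

```python
# Toy model of a simulation zone: frame 1 reads the outside input; every
# later frame's input is just the previous frame's output, so subsequent
# changes to the original input (e.g. an animated pose) are disregarded.

def run_simulation(initial_input, process, frames):
    state = initial_input
    history = []
    for _ in range(frames):
        state = process(state)
        history.append(state)
    return history
```

Note that even with an identity `process` (a zone that “does nothing”), the result stays frozen at whatever the input was on frame 1, which matches the frozen-mesh symptom.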

I’m sure with enough effort, you could use simulation nodes (or the outputs of simulation zones) to interact with your character and add Jiggle physics. I’m just not sure how you’d do that.

The setup I posted earlier can be used to accumulate forces from all other particles like you’re describing. I’ve done gravity sims in the past with this method and it works great.

1 Like

Ah, I see…

Well, I would expect data from outside the simulation zone to be fed into it as the simulation progresses (with the simulation state kept separate and not affecting anything else), rather than only at the start, and I would really expect a blank simulation zone not to change anything at all. Cloth/soft-body simulations work with animated deforming modifiers before them; this should too.

I do have a workaround, as I said, but it’s fairly inconvenient. At the least, I could hope for the ability to get a bone’s matrix/loc-rot-scale directly, without an extra object, or the ability to do what the Armature modifier does inside Geometry Nodes.

:fish: :fish: :fish:
I made some swarm controllers for my 2D underwater universe.

It was kind of a struggle at first, but mostly because my coding experience isn’t that great, so I had to revisit some concepts for days until they clicked. Now I really like the workflow, though. I’m not quite sure about the execution order: in my example I had something slightly different in mind but swayed toward a simpler approach, which I might have consolidated into a single object and tree. For now, though, these are two different objects, each with its own sim zone, so I wonder which one is executed first each frame.

I also made some simple hairy tentacles for my jellyfish with sim nodes that have inertia and sway in the water, needs a lot more work though.

  • I never use the delta time. It’s just not useful for my projects as far as I can tell.
  • I don’t know if that is already possible: I would really like to have a node that would give access to system time, so I could make the procedures a little less deterministic.
  • I had some strange behaviour when I baked stuff to the cache. In the end I had to delete the cache folders by hand so it wouldn’t crash and would show the correct GN results.

(nothing to do with simulation nodes, but is anyone else having a blank icon in the windows taskbar for the latest few alphas? And I’d like a new splash screen for the alpha, too :smiley:
Sprite fright is great and also 2 years old :man_beard: Sorry, didn’t want to make these a forum topic anywhere :fish: :fish: :fish:)

5 Likes

Hmm. That’s not really what I need… I’m looking for something that selects all connected verts in one fell swoop. To do that with your setup, I’d need a serial loop through every sort index!

Hi, I’m finding simulation nodes to be very useful.
I am using simulation nodes to create (particle/wave/etc.) simulations for real-time (game) experiences.

I mainly lament the lack of access to camera data.
With camera data, I could adjust the level of detail of my geometry based on distance from the camera,
I could delete geometry that was generated but is not on-screen, etc.
When creating a simulation with lots of instanced objects or a dense polygonal mesh, having camera data to do these kinds of optimizations is pretty important!
I can easily make camera data available in my game engine’s compute/task shaders when they do procedural geometry work, without issue.
Why can’t I access that camera data in geometry nodes?
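The distance-based LOD part is simple once camera data is available; as a hypothetical plain-Python sketch (names and thresholds are my own illustration, not a Geometry Nodes API):

```python
# Pick a subdivision level from squared distance to the camera: closer
# points get more detail, points beyond the last cutoff get the coarsest.
# `levels` holds increasing distance cutoffs (hypothetical values).

def lod_level(point, camera_pos, levels=(5.0, 15.0, 40.0)):
    d2 = sum((p - c) ** 2 for p, c in zip(point, camera_pos))
    for i, cutoff in enumerate(levels):
        if d2 <= cutoff * cutoff:
            return len(levels) - i   # nearest band -> highest level
    return 0                          # beyond all cutoffs: coarsest
```

Comparing squared distances avoids a square root per point, which matters when this runs over a dense mesh or many instances.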

Personally, I do use Delta Time a lot, as I feel it is important in my node trees for making my simulations behave consistently regardless of the scene FPS (my use cases are all real-time :slight_smile: )