Experimental rigid body simulation with geometry nodes

Demo GIFs: dominoes, rubble, sparks


This is an experiment to demo a combination of geometry nodes with rigid body physics in a “vertical slice”. The goal is to identify problem areas and potential future directions.

I want to stress that this is also NOT an official planning document or roadmap, just my personal exploration project.

WARNING!

This branch is unstable and requires a particular way of setting up nodes to make the simulation work. Existing features, like regular rigid body objects and the point cache, are partially disabled. DO NOT use this branch for important work!

Git branch: geometry-nodes-rigid-body-integration

Geometry nodes and iterative simulations

Since geometry nodes landed in Blender releases, several attempts have been made to use them for iterative simulations. The core idea is to feed the output of the nodes (usually a mesh) back into the next iteration. With some quasi-physics nodes one can get some pretty nice results:

My own experiments:

These experiments require some amount of Python scripting to “close the loop” and feed the output mesh back into the nodes.

Python code copying depsgraph results back to the mesh
import bpy

# Copies the depsgraph result to bpy.data
# and replaces the object mesh for the next iteration.
def _copy_mesh_result_to_data(src_object, dst_object):
    depsgraph = bpy.context.evaluated_depsgraph_get()
    src_eval = src_object.evaluated_get(depsgraph)
    mesh_new = bpy.data.meshes.new_from_object(
        src_eval,
        preserve_all_data_layers=True,
        depsgraph=depsgraph)
    mesh_old = dst_object.data
    dst_object.data = mesh_new
    bpy.data.meshes.remove(mesh_old)
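
A minimal sketch of how such a function might be hooked up to close the loop, using a standard frame-change handler; the object name is a placeholder and the actual setup varies per experiment:

# Illustrative: after each frame change, replace the object's mesh with
# its own evaluated result so the next iteration of the node tree sees
# the previous frame's output.
def _on_frame_changed(scene, depsgraph):
    obj = bpy.data.objects.get("SimObject")  # hypothetical object name
    if obj is not None:
        _copy_mesh_result_to_data(obj, obj)

bpy.app.handlers.frame_change_post.append(_on_frame_changed)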

One downside of the Python-based simulation hack is that it does not interact well with existing physics simulations in Blender, none of which have node-based integration yet. I decided to go a step further and implement a C/C++ version of geometry-nodes-based simulation, in a rough-and-ready way, to test feasibility.

This experiment focuses on rigid body simulation, but the findings would apply to other physics simulations too.

Features

Simple “geometry cache” to enable the nodes modifier to access data from a previous iteration

This is somewhat similar to what the Point Cache feature does, but I decided to write my own, simply because it would be easier to copy a GeometrySet directly instead of working around the outdated API.

Geometry cache API

Inserting geometry into the cache

Pulling cached geometry back into the nodes modifier
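
The real cache stores GeometrySet copies on the C++ side, but conceptually it is little more than a frame-indexed map of geometry copies. A rough Python-level sketch of the idea (all names are illustrative):

# Conceptual sketch: keep copies of per-frame results so the nodes
# modifier can read back the previous iteration's geometry.
class GeometryCache:
    def __init__(self):
        self._frames = {}  # frame number -> stored geometry copy

    def insert(self, frame, geometry):
        # Store a copy so later edits don't alias the cached data.
        self._frames[frame] = geometry.copy()

    def lookup(self, frame):
        # Returns None on a cache miss (e.g. after a reset).
        return self._frames.get(frame)

    def clear(self):
        # Invalidate everything when the simulation is reset.
        self._frames.clear()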

Extension of the Bullet SDK integration in Blender to add and remove rigid bodies as needed

The current rigid body simulation is limited to individual Object ID blocks, but I want it to simulate point clouds (a.k.a. “particles”), instances, and mesh vertices (which is a bit silly but basically comes for free).

A number of specially named attributes are used to export relevant data to the rigid body sim, such as flags for enabling bodies and properties like the initial velocity. Currently these are just regular named attributes; eventually they would become more formalized built-in attributes. After simulation, the body transforms are copied back to the relevant position and rotation attributes of points and/or instances.
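
As a rough illustration from the Python side, such attributes can be authored with the generic attribute API. The attribute names below are placeholders, not the names the branch actually uses:

import bpy

# Illustrative only: the branch defines its own attribute names.
mesh = bpy.data.meshes.new("particles")
# hypothetical per-point flag that enables a rigid body for the point
mesh.attributes.new(name="rigid_body", type='BOOLEAN', domain='POINT')
# hypothetical initial velocity, read when the body is first created
mesh.attributes.new(name="velocity", type='FLOAT_VECTOR', domain='POINT')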

Adding rigid bodies based on GeometrySet data

Updating transforms after the rigid body simulation step

Depsgraph relations to ensure correct order of operations

The rigid body simulation runs at the scene level, outside of the geometry nodes process. The output of the node graph is an instruction for the rigid body simulation, not the simulation result itself. The dependency graph makes sure that the modifier is evaluated first, then the rigid body world updates internal rigid bodies for the points we want to simulate. After the simulation, the rigid body motion state is copied back into the simulated geometry and finally stored in the cache for the next iteration.

Depsgraph relations for node <-> rigid body interactions

New geometry nodes to spawn rigid bodies and collision shapes

Each body in the simulation needs a collision shape to interact with the world. Collision shapes can and should be shared as much as possible. Simple particles can all use the same basic sphere shape. Individual shapes can be generated based on instances, much like the Instance on Points node generates visual instances.

The current approach is to create shapes separately and then assign them to bodies using a simple index. This requires some care from the user if different shapes are used, but it is the simplest way to map bodies to shapes for now.
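
A sketch of the index-based mapping described above, with stand-in data (not the branch’s actual code):

# Illustrative: a per-point "shape index" attribute selects one of the
# separately created collision shapes.
shapes = ["sphere", "box", "capsule"]   # stand-ins for collision shapes
shape_index = [0, 0, 2, 1]              # per-point attribute values

for point, index in enumerate(shape_index):
    # Out-of-range indices need care from the user; clamping is one option.
    shape = shapes[min(index, len(shapes) - 1)]
    print(f"point {point} uses shape {shape}")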

Multiple shapes can be created either from a set of instances (similar to the Instance on Points node) or as a fixed number of variations of primitive shapes (boxes, spheres, capsules, etc.). Compound shapes are supported by Bullet and are useful for convex decomposition, but they require some more design and testing.

A typical setup is to generate geometry only on the first frame, or spawn particles based on some emission rate. These new instances or points are then joined with the previous iteration’s geometry.

A specialized new SimulationComponent class is added to GeometrySet. It contains simulation data that cannot be encoded in point attributes, such as collision shapes. This is more of a hack to get inconvenient data to the rigid body simulation within the existing data structures; it doesn’t work so well as a user-facing concept.

SimulationComponent

Further steps

Based on this experiment, here are some ideas for future simulation support. This is on top of the features already implemented here.

Systems Design (big ones first):

  1. Implement a “visual debugging” feature to help users investigate hidden physics issues. Should record collision shapes, contacts, forces, custom events, etc. over time. Viewport overlay to show such data, make it selectable for reading numeric details. I would consider a feature like this essential for both users and developers. Quite a bit of work, but not so difficult to design (many game/physics engines have some variation of this feature).

  2. Make the rigid body collection an optional feature for organizing bodies. The rigid body world should not need it as an optimization to find bodies. Forcing objects into this collection adds automatic behavior that is not very transparent. Collections are a user-level feature that shouldn’t be abused like this (IMO).

  3. Dedicated node tree type for simulations (partially tested here):

    • Includes special inputs like time/frame, time step

    • Automatic merging of newly spawned geometry with the previous iteration. This could be done in separate “stages” like emit vs. modify.

    • Limitation to “simulatable” data types (points/instances for particles and rigid bodies, meshes for cloth, curves for hair, volumes for fluids, etc.)

  4. Support “events” in simulation data, such as collisions (contact points). Those would be stored in the geometry as output from the simulation step. The nodes can then use this data to change geometry, kill and spawn particles, etc.

Overhaul existing implementations:

  1. More flexible runtime cache implementation to replace the aging PointCache. Needs better serialization for all kinds of data. I tried this back in the day using Alembic, but that was not a great solution (Alembic is designed for DCC-to-renderer communication).

  2. Clearer distinction of editing vs. timeline changes in the dependency graph. I want to reset the simulation when the user changes node settings that require re-running the simulation. The cache should only be reset after such changes, not on regular time source updates.

  3. Node graphs potentially need to support suspending the execution task, or splitting it into different stages. During a single frame update there are parts that need to happen before the rigid body sim (spawn particles, update physics forces) and parts that need to happen after physics (react to events, apply physics motion, update render data). Currently the post-sim steps are simple enough to be hardcoded, but it could be important to modify data based on the new frame’s physics state to avoid lagging visuals.

  4. Good opportunity to move the BKE rigidbody code to C++. It currently uses a mix of C and C++, which is a bit fiddly. Using C++ would remove the need for the RBI C API. Calling code is mostly C++ already, so not many C wrappers are needed, if any.

  5. The current Bullet integration in Blender does not use collision groups and masks to their full potential. It uses a custom broadphase filter to add collision groups on top of the internal group/mask system Bullet already has. This is easier to manage with collections (formerly “groups”) in Blender, but it excludes the possibility of one-way interactions and of disabling self-collisions. These are essential features when using rigid bodies in large numbers as visual “particles”, because they drastically reduce the number of collision pairs that need to be computed (see the sketch below).
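
For reference, Bullet’s built-in group/mask test is a symmetric bitwise check, and disabling self-collisions is just a matter of leaving a group’s own bit out of its mask. A small Python illustration of that logic (the group names are made up):

# Bullet-style filter: a pair is considered for collision only if each
# body's group bit is present in the other body's mask.
def needs_collision(group_a, mask_a, group_b, mask_b):
    return bool(group_a & mask_b) and bool(group_b & mask_a)

PARTICLES = 1 << 0
SCENERY = 1 << 1

# Particles collide with scenery but not with each other, which culls
# the vast majority of broadphase pairs for large particle counts.
assert needs_collision(PARTICLES, SCENERY, SCENERY, PARTICLES | SCENERY)
assert not needs_collision(PARTICLES, SCENERY, PARTICLES, SCENERY)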

Gnarly Areas

Just some details about things that currently don’t work quite right, both old and new.

Depsgraph complexity

The depsgraph needs additional nodes to handle the back-and-forth between geometry evaluation (i.e. the nodes modifier) and rigid body simulation. The geometry evaluation gets an explicit “DONE” node after the main eval, which is the new exit node and only finishes after rigid body syncing of transforms is complete. This might cause depsgraph loops in more complex cases; it’s hard to tell.

Clearing cache and resetting simulation

Resetting the simulation requires removing and re-adding rigid bodies. It’s preferable to avoid allocating and freeing btRigidBody instances a lot, so eventually there should be a memory pool for this purpose (a sketch follows below). Adding and removing bodies from the world is still a bit buggy; it seems that old contact points are hard to remove when resetting the simulation. The current Blender integration can ignore most of these issues because it only adds bodies once and destroys everything (including the world itself) on reset of the timeline.
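
A free-list pool along these lines would avoid the allocation churn. This is a sketch of the pattern only, not code from the branch:

# Illustrative free-list pool: retired bodies are kept for reuse instead
# of being freed and reallocated on every simulation reset.
class BodyPool:
    def __init__(self):
        self._free = []

    def acquire(self, create_body):
        # Reuse a retired body if one is available, otherwise create one.
        return self._free.pop() if self._free else create_body()

    def release(self, body):
        # Return the body to the pool instead of destroying it.
        self._free.append(body)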

Memory management in rigid body world is dodgy

BKE rigidbody memory allocation is broken. Adding and removing bodies at runtime causes frequent crashes because of dangling pointers. It only works so far because objects are only added once and the whole RB world is destroyed and recreated on frame 1.

Mapping points/instances to rigid bodies requires unique and persistent IDs

I currently generate IDs in a somewhat crude way from the index, added on top of the largest ID of the existing points. This needs a more robust and automatic way to generate IDs for points, such that they remain persistent over the whole simulation timeline and don’t repeat even if particles are destroyed.
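
A sketch of the more robust scheme, assuming a monotonic counter stored with the simulation state (names are illustrative):

# Illustrative: a monotonic counter guarantees IDs never repeat, even
# after particles are destroyed and their indices get reused.
class PointIDGenerator:
    def __init__(self):
        self._next_id = 0  # would be persisted with the simulation cache

    def assign(self, count):
        # Hand out `count` fresh IDs for newly spawned points.
        ids = list(range(self._next_id, self._next_id + count))
        self._next_id += count
        return ids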

Applying rigid body transforms

Render data (points and instances) should get updated after rigid body simulation to include the simulated transform. Otherwise rendering will lag behind rigid body simulation by 1 frame.

The modifier node evaluation happens before rigid body simulation, so for now the post-sim rigid body sync is hardcoded to just update known location and rotation attributes of points and instances. Eventually this should be more flexible, allowing render data to use transform offset, and/or reacting to transform changes in a customizable way (e.g. change color of particles based on position). May require splitting the node eval into different stages.

39 Likes

Did I read it wrong, or can we now do simulations on instances that aren’t yet “real”? (That would increase performance and save lots of memory.) I always found Bullet limited because of that (it can only work on real objects, on top of the lack of any GPU acceleration…).

Yes, I can simulate point clouds, meshes, and instances. The former would be used for particle effects, while the latter are useful for larger objects (but those can still be generated dynamically and in greater quantities than is practical with conventional Objects).

There is a range of options for how bodies interact. For large particle counts a simple collision shape is preferable and collisions should be “one-way”, i.e. particles bounce around but don’t push other bodies and don’t self-interact (as shown in the “sparks” example).

Just keep in mind that it’s a proof of concept, not a polished feature. If, how, and when something like this might land in Blender is completely open.

6 Likes

Implement a “visual debugging” feature to help users investigate hidden physics issues. Should record collision shapes, contacts, forces, custom events, etc. over time. Viewport overlay to show such data, make it selectable for reading numeric details. I would consider a feature like this essential for both users and developers. Quite a bit of work, but not so difficult to design (many game/physics engines have some variation of this feature).

This has been worked on during the previous GSoC:

I don’t think it would be too much work to do some cleanup and integrate it into master.
However I haven’t had time to do it, so if you feel like you want to have a go at it I would be really happy.

3 Likes

Oh nice, I didn’t know about this GSoC project. I’ll have to take a closer look, thanks.

IMO such a visualizer would be a lot more powerful in combination with a good caching system, because that would allow scrubbing through the timeline, instead of just looking at a single frame. Physics debugging information could be stored in the cache optionally and temporarily, then adding a viewport overlay isn’t so much work any more. I’m beginning to think that an upgrade of the point cache system should have priority.

If I find time I will make a more detailed plan for what I’d like the cache to be able to do, and how that could be achieved. It’s so foundational and shared by all physics systems that I think it’s worth spending time on before implementing some of the shinier new features.

3 Likes

Simple “geometry cache” to enable the nodes modifier to access data from a previous iteration
Geometry cache API
Inserting geometry into the cache
Pulling cached geometry back into the nodes modifier

This is a feature that is very valuable by itself. It was first discussed in the Geometry Nodes Design Document and abandoned/forgotten about: ⚓ T74967 Geometry Nodes Design

It was also asked about on Right Click Select idea aggregator website:

  • https://blender.community/c/rightclickselect/qQDR/?sorting=hot
  • https://blender.community/c/rightclickselect/081V/?sorting=hot

While there is no open issue for Geometry Nodes Caching, I see that you implemented exactly that.

I am also very interested in this feature, because complex Geometry Nodes setups seem to re-compute every node group every time, even for node groups whose inputs have not changed and which should not need to be re-computed.

There seems to be no work-around for this lack of caching support, save for creating many intermediary invisible objects that contain the data one wishes to cache.

I believe you have made huge progress towards this feature and would like to ask you:

  • do you think a Geometry Cache node could be implemented using your caching work?
  • can this caching be completely automated (e.g. automatic memoization to disk between frames when inputs have not changed) or would it require a new “Cache Node” as discussed in the design document?
  • if I can help with the development and testing of this new cache feature, where would you recommend I start from your fork?
  • should we open a new Geometry Nodes Caching ticket in the issue tracker?

Thanks again for your work

1 Like

That’s not the case here; it’s been discussed a few times in various planning tasks, and in the code blog last year. There are just so many competing priorities that we haven’t gotten to implementing it yet. There are two possible caching implementations, automatic caching and a more user-oriented cache node. We’d like to have both, ideally. The recent developments on the geometry nodes evaluator will make this easier. One idea is to keep track of which of the node network’s inputs have changed and invalidate cached data when they change.

11 Likes

The implementation I made is as basic as it gets, just enough to get the job done. It’s just a placeholder, storing two GeometrySet copies for the previous and current frame. The final version of this system will require a more thorough rewrite of the PointCache feature, integrating other caching formats (VDB etc.) and providing a common API to handle cache insertion, interpolation, persistent storage (disk), and more.

While I can see the value of automatic caching, I’m also a little bit worried that it might be difficult to make robust and reliable. “Automatic” means that the cache is invalidated whenever input data changes, which is easy to say but difficult to determine reliably in practice. We can’t do a bit-by-bit comparison of all data; that would be ridiculous. We might want to use the depsgraph to flag changed data, but currently the depsgraph isn’t actually data-aware: it just handles “operations” without any concept of what data they are changing. If input data (geometry) is consistently cached itself, we could perhaps have “data revision” numbers that can be compared (could this work with linked libraries?). I would definitely recommend adding an “erase” button for users to clear the cache manually and force a rebuild in case this system fails and gets stuck on old data.

I’d leave it to the core devs to create a task for this in the tracker. If you want to help, you might start putting together a list of required features and figure out which of them exist in Blender currently and what’s actually new. It’s always good to have more eyes on this.

2 Likes

The debug information was stored in the point cache during the GSoC project as well. I hope the code will be useful once the new caching system is in place. I would be happy to help with the cleanup!

5 Likes

Ah, nice to see that you are still around!

If you still have time for some coding, perhaps you and Lukas can join forces and extend the current point cache system? To me it seems like modifying the existing system in place would be a good first step.

I actually don’t fully remember what is left to be done with the GSoC code, so perhaps that is something you and I should work on and see if we can get the code merged.

5 Likes

I think I described the automatic caching idea incorrectly. We’ve mainly thought about it as a way to make tweaking values in the node group faster when nothing else is changing. For example, a set position node after a boolean node could be fast to tweak if the result of the boolean calculation was cached. In that case we would just have to distinguish an update that came from inside the node group from one that came anywhere else in the depsgraph.

Of course it would be nice to be able to use the depsgraph to know which external change caused the modifier to recompute, and some of the ideas you mention sound worth investigating when the time comes. But it’s also probably much more complicated, as you mention.

3 Likes

I just wanted to add a comment here about a useful node for GN, not only for sim, but related to this: a node called “Field at time” that would evaluate the graph at a specific frame and give a value, so you can get an attribute at that frame, for example computed UVs, and apply it at any point in time.

I think this is deeply related to this part of the development, since it’s also mentioned that right now the “trick” is to have a full copy of the GN graph for the previous frame and the current frame 🙂

4 Likes

We’ve mainly thought about it as a way to make tweaking values in the node group faster when nothing else is changing.

Yes, there seem to be different expectations of where caching would be employed:

  1. Inside node trees to retain the output of individual nodes. Avoids downstream node evaluation.
  2. For the entire object’s geometry output. Avoids dependent object updates.

How do we decide which nodes or objects to cache persistently without racking up too much memory? The easiest solution would be to give users control, so they can decide which nodes are expensive enough to warrant caching (maybe some nodes like Boolean could have caching enabled by default).

The problem of detecting cache invalidation does not exist on the per-node level, because nodes are associated directly with the data they output. The depsgraph only has a very informal mapping to object data via its component types. Using the depsgraph for invalidating a cache could potentially produce quite a few false positives and negatives:

  • false positive: cache is invalidated even though the actual data is not affected, leading to unnecessary recalculation.
  • false negative: cache is not invalidated because the input data is not considered part of the tagged components, so it gets “stuck” and users scratch their heads over why things don’t update.

The Object::runtime data is essentially already a cache with arbitrary data, but it’s only used for “caching” between depsgraph operations in the same frame and is not persistent. The Point Cache has persistent storage, but it’s currently not general-purpose enough to support all types of geometry. The point cache is also only available in certain contexts (physics sims) and not as general caching, let alone providing a cache per node.