Experiment: Rigid Body Physics in Geometry Nodes

This is an experimental branch for testing rigid body simulation in geometry nodes. The general idea is to present rigid bodies, constraints, and other parts of the simulation as domain data with attributes, similar to other kinds of geometry.

Test branch can be found here: https://projects.blender.org/blender/blender/pulls/124093

This approach is quite different from an earlier test I made, which was much more object-based. That branch relied on a shared physics world between objects, which required complicated depsgraph relations to keep geometry synchronized with the physics world.

The new code is completely separate from the existing Bullet integration in Blender. All aspects of the simulation are handled inside a new GeometryComponent, which keeps the code nicely compartmentalized and does not require substantial changes to existing geometry code.


What the system can do at this point:

  • Create rigid bodies, collision shapes, and constraints as elements in the physics component.
  • Advance the simulation by time steps.
  • Access properties of rigid bodies and constraints as attributes of the physics component.
  • Add forces, torque, and other effects dynamically.


Here’s a test of a cannonball hitting a brick wall. It uses breakable constraints attaching each body to the world, which are activated by the collision.

How it works

All the physics data lives inside the geometry set, rather than in a separate physics world inside the scene. Bodies and constraints are associated with the Point and Edge domains of the geometry, respectively (dedicated domains for bodies and constraints may be added later).

The geometry component can also store a physics world, which is responsible for the actual simulation data. It’s fine to have bodies and constraints without a physics world in a component; this is useful for creating bodies separately and then adding them to an existing world using a Join Geometry node.


Time-stepping the simulation happens with a simple “Time Step” node. This can be used in a Simulation Zone for conventional physics simulation within the scene timeline. It could also be used as a standalone node, e.g. for one-off collision detection. It could also be used in a Repeat Zone to execute a fixed number of simulation steps all at once, e.g. for a physics-based node tool.
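As a rough illustration of the Repeat Zone idea, here is a minimal Python sketch (not the actual node code; `substep_sizes` is a hypothetical helper) that splits one frame’s delta time into fixed-size substeps, the way a Repeat Zone wrapped around a “Time Step” node might:

```python
import math

# Hypothetical sketch (not the actual node implementation): split one
# frame's delta time into equal substeps no larger than a fixed step
# size, as a Repeat Zone around the "Time Step" node might do.

def substep_sizes(frame_delta, fixed_step=1.0 / 240.0):
    n = max(1, math.ceil(frame_delta / fixed_step))
    return [frame_delta / n] * n

# One frame at 24 fps splits into small, equal steps that add up to
# the original frame delta.
steps = substep_sizes(1.0 / 24.0)
assert abs(sum(steps) - 1.0 / 24.0) < 1e-9
assert all(s <= 1.0 / 240.0 + 1e-12 for s in steps)
```

Each substep would then be fed to the “Time Step” node in turn, with any per-substep updates or triggers evaluated in between.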

The component defines a set of built-in attributes for the properties of rigid bodies. These range from basics like mass, position, and velocity to more specialized features like rolling friction and sleeping thresholds. These properties map directly to the underlying physics implementation (the Bullet engine for the time being). To users they show up as attributes in the spreadsheet and can be accessed using existing techniques.

In addition to built-in attributes that map to the Bullet btRigidBody instances there is also CustomData for bodies and constraints. This enables dynamic attributes associated with those domains.

Constraints are described in the edge domain, because each one connects two rigid bodies, which resembles the topology of a mesh edge. Target bodies are identified by simple index attributes (pointers are used internally). Adding or removing bodies automatically remaps those indices. Constraints also function with only one valid body (index -1), in which case the body is constrained to a fixed point in object space.
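The index remapping can be sketched in plain Python (this is not the actual implementation; `remap_constraints` and the pair representation are assumptions for illustration):

```python
# Hypothetical sketch: remap constraint body indices after some bodies
# are removed, so each constraint keeps pointing at the same body.
# Index -1 means "no body"; the constraint then anchors to a fixed
# point in object space.

def remap_constraints(constraints, removed):
    """constraints: list of (body_a, body_b) index pairs.
    removed: collection of body indices being deleted."""
    removed = sorted(set(removed))

    def new_index(old):
        if old == -1 or old in removed:
            return -1  # target body is gone: fall back to a fixed anchor
        # shift down by the number of removed bodies with a smaller index
        return old - sum(1 for r in removed if r < old)

    return [(new_index(a), new_index(b)) for a, b in constraints]

# Removing body 1 shifts indices above it down by one.
assert remap_constraints([(0, 2), (1, 3), (-1, 0)], {1}) == \
    [(0, 1), (-1, 2), (-1, 0)]
```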

Technical Challenges

There are some aspects of the design that require more attention.

No Copying of Physics Data

The physics world internally holds a lot of additional state for bodies, like broadphase pair caches, active contacts, and more. For this reason doing full copies of rigid bodies and re-inserting them into the world is not great for performance (and actively discouraged by the Bullet API).

The conventional data flow in geometry nodes leads to copies of data frequently being made (mostly optimized out by using implicit sharing):

  • Node output geometries are copied to linked node inputs. During the copy the component is usually immutable (2 concurrent users), but in simple pass-through chains becomes mutable again when the next node in the chain is executed.

  • Branching creates 2 permanent users of the geometry component, which makes a copy necessary. This applies even in “simple” branches like Viewer nodes, where one branch is read-only.

  • Realizing instances makes a copy of the input data. This is of course correct for actually realizing instances but is also used in the case of joining geometries: the input geometries are copied, merged, and then the original data is released.

In the cases mentioned above additional copies of the physics data would be costly and technically unnecessary: the physics data is usually moved from one node to the next, adding and removing bodies, advancing the simulation, etc. To ensure physics data is never actually copied, the component uses the following concepts:

  1. Physics data gets moved rather than copied when making it mutable.

    This means only the first copy has “real” physics data. Any subsequent copies will not be able to do simulation. This violates the design of geometry nodes somewhat, which treats all copies as equal. Which copy is “first” can be random, so node trees need to be constructed very carefully to avoid ambiguity.

  2. Physics data is cached after moving.

    When physics data is stolen by another geometry, the source will use a cache to provide read-only data. In this state the physics component becomes basically a point cloud with simple attributes.
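The move-and-cache behavior can be sketched roughly like this (hypothetical Python, not the actual C++ component; the class and method names are made up):

```python
# Hypothetical sketch of the "move, don't copy" rule: when a copy is
# made mutable, the live physics world moves to the new copy and the
# source keeps only a read-only attribute snapshot.

class PhysicsComponent:
    def __init__(self, world):
        self.world = world  # live simulation state, must never be copied
        self.cache = None   # read-only snapshot used after a move

    def copy_for_write(self):
        new = PhysicsComponent(self.world)  # steal the live world
        self.cache = dict(self.world)       # keep a readable snapshot
        self.world = None                   # source can no longer simulate
        return new

    def can_simulate(self):
        return self.world is not None

src = PhysicsComponent({"position": [(0.0, 0.0, 0.0)]})
dst = src.copy_for_write()
assert dst.can_simulate() and not src.can_simulate()
assert src.cache == {"position": [(0.0, 0.0, 0.0)]}
```

After the move, the source behaves essentially like a point cloud with simple attributes, as described above.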

While the system works as-is, a more robust and predictable solution is needed here.

Constraint Types

All constraints are treated equally as part of the Edge domain. Constraints are usually categorized by type (ball-and-socket, hinge, slider, cone-twist, etc.). This type is superficial: constraints can be described in terms of the generic 6DoF constraint, the type just makes it easier to understand the purpose and behavior of a constraint.

The constraint type attribute exists, but is read-only. Changing constraint types can be done with a special create_constraints function, but not by simply writing to an attribute. Since enums are not supported as attributes, the constraint type is currently a plain integer, which is far from ideal. A node could be added to “translate” this value into something meaningful.
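To illustrate the point that the named types are just presets over the generic 6DoF constraint, here is a hypothetical table of locked axes (the names and axis layout are assumptions for illustration, not actual Bullet or branch data):

```python
# Hypothetical illustration of constraint types as presets of the
# generic 6DoF constraint. True = axis locked; axis order is
# (tx, ty, tz, rx, ry, rz).

CONSTRAINT_TYPES = {
    "fixed":  (True, True, True, True, True, True),
    "point":  (True, True, True, False, False, False),  # ball-and-socket
    "hinge":  (True, True, True, True, True, False),    # rotates about z
    "slider": (False, True, True, True, True, True),    # slides along x
}

def locked_axes(type_name):
    return CONSTRAINT_TYPES[type_name]

# A hinge locks everything except one rotation axis.
assert locked_axes("hinge") == (True, True, True, True, True, False)
```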


Extensibility

Conventional workflows for setting up physics are based around defining assets, then inserting them into the physics world for simulation, and finally transferring data back to the original object. While this can be implemented to some degree with the “realize instances” mechanism, it only works for relatively simple objects whose properties don’t change after the simulation has started.

More complex simulations require, for example, effectors that react to scene conditions, change strength based on animation, and much more. These kinds of features require deep integration into the simulation nodes, so that the nodes defining the behavior can be executed as part of the time step. This makes it hard to build systems that can be extended without touching the simulation itself.

This kind of problem is not unique to physics simulation and will come up in other areas. Some ideas have been proposed to “inject” node graphs into existing node setups, but it’s not a trivial problem.

Future additions

Some ideas for things to add:

  • Drawing for physics components

    This would be useful for debugging purposes to get a quick impression of the state of the physics world. Currently a complicated setup for transferring data to visible instances or similar is needed to view the placement of rigid bodies. Physics data should not show up in renders directly, but displaying a viewport representation is necessary.

  • Contact point output

    Getting information about contact points between rigid bodies could be extremely powerful. This can be a simple point cloud that encodes current contacts and their state. Since contacts are essentially just temporary constraints they might also be included in the constraint/edge domain (but harder to distinguish from user-defined constraints).

  • Collision triggers

    Physics shapes can also be used purely for collision detection, without any direct dynamic response. This can provide a feedback mechanism to trigger custom behavior.


This looks interesting, any chance you could share the .blend setup for the wall? Would like to take a peek at it.

SDL2 input / mouse / openXR input nodes planned?

‘state machines’ ?

Great to see this progress! I had a play with the branch, and while I could see some rigid body shape positions moving in the spreadsheet, I could not work out how to get the transform data back onto a visible instance.

Would be great to see a full node layout screenshot of a super simple example of the workflow. Just a sphere that drops onto a static plane would suffice.

FINALLY!!! :heart_eyes: Been waiting so long for physics to be implemented into Geometry Nodes. Any idea when new advanced solvers like XPBD will run the system instead of the old Bullet engine?

So one question, and this is based on some of the tests a couple of people have done with the existing setup in Blender to try and ‘hack’ together cloth-like simulation, etc.

Does that ‘Time Step’ node support sub-frame stepping? My current understanding is that Blender simulation nodes do not, and hence collision detection starts to fall apart as soon as anything moves a little quickly.

I want to clean up the examples I have so far, then I’ll post some.

I usually just use Sample by Index to copy transforms back from rigid bodies to rendered instances. This may require an index offset since there are typically other bodies like static colliders in the world. This approach works for simple scenes, but in the long run we need a better way to keep track of a subset of bodies by group ID or something. Sample by ID could maybe help with that.
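For illustration, the index-offset idea amounts to something like this (a hypothetical Python sketch, not an actual node):

```python
# Hypothetical sketch of the index-offset idea: the first num_static
# entries in the body list are static colliders, and only the remaining
# dynamic bodies drive visible instances.

def dynamic_transforms(body_positions, num_static):
    return body_positions[num_static:]

positions = [(0, 0, 0), (1, 2, 3), (4, 5, 6)]  # body 0 is a static floor
assert dynamic_transforms(positions, 1) == [(1, 2, 3), (4, 5, 6)]
```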

Still not that moment.

I’d like to take a look at Jolt, but I also wouldn’t dismiss Bullet too early. Yes, the API is showing its age, but it’s a proven system. The main goal here is integration into geometry nodes, so I don’t want to get too deep into the weeds trying to implement XPBD just yet.


That’s a good point. The node currently just invokes a single timestep on the Bullet side. One could simply split the delta time and run multiple iterations in a Repeat Zone for each full frame step. That way updates and triggers can run before/after each substep. More control over the fixed internal step size and other world parameters could also be added to the node.

Thanks. Alas, I am still not getting a visible result. Could you post a super simple screenshot or scene file for a 2 body setup, like a sphere dropping to the ground. Cheers!


This is great.
I hope that this system will be “Unit Scale” aware. Not a single system I’ve used so far (in Blender or a plugin) has taken the unit scale into account.
Even a “factor socket”, just like “Delta Time” in the Simulation Zone, would be useful.


You may want to check out Vertex Block Descent (ankachan.github.io).

And here is a promo video from Two Minute Papers: Crazy 50,000,000 Point Bouncy Jelly Simulation! - YouTube

Ok, I did work out a basic “hello world” style setup for anyone who is struggling to understand the instance workflow. This is just a monkey falling onto the ground, about as simple as it gets. I will say that the instancing part does seem very counter-intuitive, so hopefully this is not set in stone.

Feeding objects into the simulation feels natural, so is there any way the instancing could be done automatically or in a dedicated node? I can only imagine how hard the graph will be to read with a complex setup.

Doesn’t seem to be a way to upload blend files. So a screenshot will have to suffice…


Comparison test with Blender’s rigid body system. Some 200 monkeys; the monkey collision shapes are convex hulls and the cup collision shape is a mesh in both cases.

Blender version took 17 seconds to compute:

Geonodes took 56 seconds to compute:


Has anyone worked out how to simulate objects whose initial position is not at the (0, 0, 0) world origin? You can copy the initial position of the object to a point, but then the center of mass is off.

Can’t see any way of setting the center of mass manually, and it is not showing in the spreadsheet, so I am out of ideas.

Correct, Center-of-Mass is currently missing. I’ve added a short-term TODO list on gitea.
