Unified Simulation System Proposal

@TylerGubala

The thing with “logic nodes” is that they only make sense for some kinds of simulation. I’m currently focussing on the parts that are common to all kinds of simulations. That does not mean that we can’t have a logic system (whatever that means exactly) for the simulations that need it. For example, the particle system I’m developing has events, conditional execution etc. I intend to extend the proposed “node tree syntax” to allow users to use such nodes in the same node tree, but that is not part of the generic system. Not sure if that makes sense…

I think I do not fully understand your use case. It might help when you write a separate document that explains your use case in a concise way and link it in this thread.

You can append simulations and node groups.

These are actual screenshots of mockups. I made them in a branch of the functions branch, which I've currently only uploaded to GitHub.

I think “variable time steps” has two different meanings. You can change time steps to compromise between accuracy and computation time, or you can change time steps to change the speed of the simulation. Based on what you describe, you are probably only talking about the second meaning. Letting users change how much simulation time passes per frame should be quite easy. I’m quite sure that we will not allow negative time steps. Simulations should be simulated forwards and then played backward to achieve that effect.
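A minimal sketch of the second meaning (all names here are hypothetical, not actual Blender API): a per-frame time scale controls how much simulation time elapses per frame, clamped to non-negative values since negative time steps are not planned.

```python
# Hypothetical sketch: per-frame time scaling for a simulation.
# All names are illustrative, not an actual API.

def advance(sim_time, frame_dt, time_scale):
    """Advance simulation time by a scaled frame delta.

    time_scale = 1.0 -> real time, 0.5 -> slow motion,
    0.0 -> frozen. Negative scales are clamped to zero:
    simulations run forwards and are played backwards instead.
    """
    scale = max(time_scale, 0.0)
    return sim_time + frame_dt * scale

t = 0.0
t = advance(t, 1 / 24, 1.0)   # normal speed
t = advance(t, 1 / 24, 0.5)   # slow motion
t = advance(t, 1 / 24, -2.0)  # clamped: time does not move backwards
```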

I did not know that library, will check it in more detail. In my view, interdependent simulations are actually a single simulation with a solver that can incorporate multiple subsolvers. So they should fit into the current design without bigger issues (the difficulty is to actually implement such a solver).

This can be achieved by having a top-level parameter in a node system that controls different settings in the simulation. For example, you could build a node system that, depending on some boolean value, uses a low resolution model or a high resolution model in a rigid body simulation. We can probably make this more user-friendly by providing good prebuilt node groups (+ optionally a separate UI), but it does not need to be part of the fundamental design of the framework.
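In its simplest form, the boolean switch described above could look like this (a hypothetical sketch; the function and model names are made up for illustration):

```python
# Hypothetical sketch: a top-level boolean parameter on the node
# system chooses between two collision models for a rigid body
# simulation. All names are illustrative.

def pick_collision_model(use_high_res, low_res, high_res):
    """Top-level parameter selects which model the simulation uses."""
    return high_res if use_high_res else low_res

model = pick_collision_model(False, "proxy_hull", "full_mesh")
```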

I’d say that this is mostly an implementation detail. There are many things we can do to allow for parallel simulations when actually implementing this system though. Most importantly, operations would have to specify what data they read/write in advance. This allows a scheduler to figure out what can be parallelized.
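As a sketch of how declared read/write sets enable a scheduler to detect parallelism (all class and attribute names are hypothetical): two operations can run concurrently only if neither writes data the other touches.

```python
# Hypothetical sketch: operations declare in advance which state
# they read and write, so a scheduler can find pairs that may run
# in parallel. All names are illustrative.

class Operation:
    def __init__(self, name, reads, writes):
        self.name = name
        self.reads = set(reads)
        self.writes = set(writes)

def can_run_in_parallel(a, b):
    """Two operations conflict if either writes data the other touches."""
    return not (a.writes & (b.reads | b.writes)
                or b.writes & (a.reads | a.writes))

gravity = Operation("gravity", reads={"mass"}, writes={"velocity"})
collide = Operation("collide", reads={"position"}, writes={"position"})
integrate = Operation("integrate", reads={"velocity"}, writes={"position"})
```

Here `gravity` and `collide` touch disjoint data and could be scheduled concurrently, while `gravity` and `integrate` conflict on `velocity`.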


Maybe not, if you use a neural network to manage the results acquired from copied states, and then pass them on to an integrator, like an AI module that can be specialized for the type of simulation…

@YvesBodson I’m quite sure that we will not use a statistical model for this part of the process.

Regarding LLVM:

I see. It would be interesting to investigate how SPI got their LLVM OSL to be faster than their C++ shaders, though. I still think it could be useful in the future to not use unrestricted C/C++ for simulation nodes, but only a subset of the language or a DSL. That would keep the door open for GPU evaluation (GLSL, SPIR-V, Metal, maybe ISPC on future Intel GPUs). Disney's https://www.disneyanimation.com/technology/seexpr.html might also be worth a look.


If used correctly, LLVM will probably provide the best run-time performance on the CPU. So it is not really surprising to me that compiled OSL can be faster than precompiled C++ shaders. To me, LLVM and GPU evaluation are not “the solution”, but they can certainly be part of the solution.

For the rest of Blender, a simulation is just a function that runs on the CPU like any other. However, it should be possible to have State Objects that reference data on the GPU and Operations that, when executed, invoke some processing of that data on the GPU.

Btw, every node system is a DSL as well. For now I don't see any reason why one should not be able to create Operations with C/C++, Python, nodes or some other domain-specific language.


I’m thinking about graph optimization - removing redundant nodes, merging duplicates, etc. If we can reason about what’s inside the nodes and know that they don’t have side effects, the runtime can perform those optimizations. If, on the other hand, the nodes are black boxes that can do anything (write to files, change the scene graph, etc.), then it’ll be close to impossible for the runtime to figure anything out whatsoever.

For example, if we can isolate independent branches in a node graph, we can potentially evaluate those in parallel. Obviously, we can only do this for nodes that are known not to be writing to the same memory. Since we can’t make any guarantees about black-box script nodes, this optimization would fail for those nodes.
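The "merging duplicates" optimization mentioned above can be sketched as common subexpression elimination on a tiny expression graph (a hypothetical sketch; it assumes nodes are pure and listed in topological order, with inputs referring to earlier node indices):

```python
# Hypothetical sketch: merging duplicate side-effect-free nodes.
# nodes: list of (op, inputs) tuples in topological order, where
# inputs are indices of earlier nodes. Only valid when all nodes
# are pure (no side effects).

def dedup(nodes):
    """Return a deduplicated node list and an index remapping."""
    seen = {}    # canonical (op, remapped inputs) -> new index
    remap = {}   # old index -> new index
    out = []
    for i, (op, inputs) in enumerate(nodes):
        key = (op, tuple(remap[j] for j in inputs))
        if key in seen:
            remap[i] = seen[key]        # duplicate: reuse earlier node
        else:
            seen[key] = remap[i] = len(out)
            out.append(key)
    return out, remap

# Two identical "value" nodes feeding an "add" collapse into one:
merged, remap = dedup([("value", ()), ("value", ()), ("add", (0, 1))])
```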

I was more thinking of possibly evaluating the entire node graph on the GPU, not individual nodes (see Cycles). In a mixed environment, it could end up so that GPU->CPU memory transfers otherwise become the bottleneck and cancel out the benefits of GPU processing altogether.

Maybe it’s just my Cycles-centered mind, but I think it’s a big advantage that we can run our shader graphs on a variety of devices at full speed, while still running all the expensive operations on the CPU as compiled C code and not some interpreted language.

I’m doing many of these optimizations already. While a “function node” itself is a black box, this does not mean it is allowed to do everything. I talked about this topic in my very first document about Everything Nodes (note that I wrote this more than a year ago and some aspects of it are not up-to-date anymore):

Similar constraints can be used for Operations. E.g. an operation is only allowed to modify the data that is passed into it.


We are talking about two different node systems right now: Simulation nodes and Function nodes. Simulation nodes are a DSL that allows users to schedule operations on state objects. Function nodes are a DSL that allow users to model data flow with inputs and outputs. Both can have different evaluation mechanisms.

Cycles only has “function nodes”. It makes a lot of sense to take an entire node tree of this kind and compile it to run natively on the CPU and/or GPU. I’m actually very interested in working on this topic. Things become a bit more tricky when you want to allow users to work with lists of data or other more complex data types like strings or meshes though.

For Simulation nodes, I’m not yet sure if it is necessary to take multiple of them and compile them down to a single function. Maybe it is. The GPU-CPU memory transfers can be reduced by storing references to data living on the GPU in the State Objects.
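One way the GPU-referencing State Objects could look (a purely hypothetical sketch, not the actual design): the object keeps an opaque device-side handle, GPU operations pass the handle around without transfers, and data is downloaded to the CPU only on first host access.

```python
# Hypothetical sketch: a State Object referencing data that lives
# on the GPU, downloading it only when the CPU actually needs it.
# All names are illustrative.

class StateObject:
    def __init__(self, gpu_handle, download):
        self._gpu_handle = gpu_handle  # opaque device-side reference
        self._download = download      # callable: handle -> host data
        self._host_cache = None

    def gpu(self):
        """Chained GPU operations pass the handle around; no transfer."""
        return self._gpu_handle

    def cpu(self):
        """Transfer only on first CPU access, then cache."""
        if self._host_cache is None:
            self._host_cache = self._download(self._gpu_handle)
        return self._host_cache

# Fake "download" that records how often a transfer happens:
transfers = []
state = StateObject("buffer0", lambda h: transfers.append(h) or [1, 2, 3])
state.cpu()
state.cpu()  # second access hits the cache, no extra transfer
```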

We can also compile a chain of low level Operation nodes into one function that runs on the GPU and run one Operation that is written in Python afterwards. Saying that every function and simulation node has to be able to run on the GPU would make many use cases much harder or even impossible to achieve.


I wrote another document about how I’m currently evaluating user-defined functions. This other thread can be used for questions and feedback.


Yes. Here is a hypothetical use case for supporting zero and negative time steps: a simulation of snowflakes and cloth affected by wind and gravity (e.g. a canvas dropping through the air). For artistic purposes, you may want to control the canvas's time rate: first slow down its forward time stepping, then hold it at zero (immobile canvas), and then switch to negative time stepping (physically wrong, but potentially a wonderful artistic effect), while the snowflake simulation keeps a constant forward time rate the whole time. If this kind of variable time stepping were supported, it might allow interesting simulations in the future. I think initialization of the simulation system could be done via a separate function call on the first frame where the simulation starts.
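The per-object time rates described above could be sketched like this (hypothetical; this is the poster's idea, not the proposed design, and all names are made up): each object advances its own local simulation time at its own rate while global frame time moves forward uniformly.

```python
# Hypothetical sketch of per-object time rates: each simulated
# object advances its local time at its own rate, while the
# global frame time moves forward uniformly.

def step_local_times(local_times, rates, frame_dt):
    """Advance each object's local simulation time by its own rate.
    rate > 0: forward, rate == 0: frozen, rate < 0: backward."""
    return [t + r * frame_dt for t, r in zip(local_times, rates)]

# Snowflakes at a constant forward rate; the canvas slows down,
# freezes, then reverses over three frames:
times = [0.0, 0.0]  # [snow, canvas]
for canvas_rate in (0.5, 0.0, -1.0):
    times = step_local_times(times, [1.0, canvas_rate], 1 / 24)
```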


Say we wanted UPBGE to have a simulation block: is there some sort of way to loop the simulation system (to turn the sim system into a game loop)?

So is this still active and under development?
Are there simulation frameworks / libraries being considered for creating this unified simulation system?

Interactive Computer Graphics

They have SPlisHSPlasH, “an open-source library for the physically-based simulation of fluids”,
and the PositionBasedDynamics project, “a library for the physically-based simulation of rigid bodies, deformable solids and fluids.”

taichi-dev/taichi_elements: a high-performance multi-material continuum physics engine in Taichi
github.com/taichi-dev/taichi_elements

zenustech/zeno node system at github.com/zenustech/zeno

At github.com/nepluno/
there are many interesting solutions: libwetcloth, libWetHair, pyasflip, lbfgsb-gpu

There is also a developer who has posted their own progress on a self-made simulation engine: Realtime GPU smoke simulation - Other Development Topics - Blender Developer Talk. It looks pretty interesting. Could the Blender Foundation contact this developer to ask if they would be interested in joining Blender development?

And then there is also the open-source simulation framework for crowds called Vadere.

+PhysBAM
+Chrono Project

Sorry if I wasted your time.