MiLiAN GPU Simulation framework. Framework available, integration with Blender?

Hello!
What is this about?

I want all this inside Blender. Blender’s amazing progress in recent years has inspired me to investigate this further, and I have many questions :slight_smile:

After posting in Blender chat (node-physics-module) and reading through topics and posts regarding simulation, physics and nodes in Blender, here is a list of questions I cannot seem to find answers to.

My Blender experience is limited, but my experience with it plus huge amounts of data has not been encouraging so far, which runs counter to the purpose and objective of my GPU simulation framework. So, beginner questions now:

A note up front: I code in C/C++/CUDA. OpenCL is an option, but initially not relevant. Python is not a worry either; I use it as well.

  1. A single GPU with 24 GB will currently be able to process roughly 280-310 million fluid particles, possibly more with fewer attributes, mixed precision, etc. Right now, I cannot see any host app dealing with this efficiently in the context of plugins or extensions. One can always display a proxy, but the data still has to move, and if we wish to integrate well with native features … What do you think?
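To make that estimate concrete, here is the rough arithmetic behind it. The per-particle byte count is a hypothetical layout I am assuming for illustration (position, velocity, a few scalars, solver scratch); the real figure depends on the solver.

```python
# Rough particle-count budget for a 24 GB GPU.
# Assumed (hypothetical) per-particle layout, fp32 unless noted:
#   position (3), velocity (3), density/pressure/mass (3), id (int32),
#   plus solver scratch -> rounded to ~80 bytes per particle.
BYTES_PER_PARTICLE = 80
VRAM_BYTES = 24 * 1024**3      # a 24 GiB card
USABLE_FRACTION = 0.9          # leave ~10% for framework overhead

def max_particles(bytes_per_particle=BYTES_PER_PARTICLE,
                  vram=VRAM_BYTES, usable=USABLE_FRACTION):
    """How many particles fit in the usable part of VRAM."""
    return int(vram * usable // bytes_per_particle)

print(f"{max_particles() / 1e6:.0f} million particles")  # ~290 million
```

Fewer attributes or mixed precision shrink the per-particle footprint, which is exactly why the upper end of the quoted range moves with the layout.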

  2. Nodes: I can certainly see how breaking it all down into node operators would make sense, but again, the amount of data moving through the system would either require a massive modification of the existing node system (an assumption based on observed performance and stability), or a GPU-side mirror so that the relevant nodes/operators can function off-host, even with respect to node-graph evaluation. Take it with a grain of salt; this estimate comes from a Blender rookie.

  3. There is no C++ API, so data exchange goes through Python. More or less dirty hacks aside, I am a bit worried that Python will severely limit performance (threading, scheduling, data movement), unless, again, only instructions are shared and almost all Blender logic for preparing data, creating points, etc. lives on the GPU (or is otherwise rewritten to mediate).
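For what it’s worth, the Python API’s one escape hatch for bulk data is the `foreach_get`/`foreach_set` pattern, which transfers an entire attribute as one flat, contiguous buffer instead of touching elements one by one from Python. A minimal sketch of that flat-buffer convention, outside of `bpy` so it runs anywhere:

```python
# Sketch of the flat-buffer convention Blender's foreach_get/foreach_set use:
# N points become one contiguous float32 buffer of length 3*N, so the whole
# transfer is effectively a single C-side copy, not N Python attribute sets.
from array import array

def pack_positions(points):
    """Flatten [(x, y, z), ...] into one contiguous float32 buffer."""
    buf = array('f')
    for p in points:
        buf.extend(p)
    return buf

def unpack_positions(buf):
    """Recover [(x, y, z), ...] tuples from the flat buffer."""
    return [tuple(buf[i:i + 3]) for i in range(0, len(buf), 3)]

pts = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
flat = pack_positions(pts)
# Inside Blender this buffer would go through something like:
#   mesh.vertices.foreach_set("co", flat)
assert unpack_positions(flat) == pts
```

Even with this pattern, the data still crosses the host/interpreter boundary once per attribute per frame, which is the cost the point above is worried about.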

I will do my R&D and see if I can figure out more, but I am hoping that some of you can point me in the right direction or outright answer some of my concerns with constructive criticism or feedback; better yet, a good discussion, as I have seen quite a few of those around here.

Apart from Houdini, there is literally no application with a native, fully integrated, well-designed and performant (scalable) fluid simulation system out there yet. Understandably so, but I like the idea of free software like Blender doing it :slight_smile: I have often heard that “Blender is just not designed to do this”. Fine, but do you agree? And even if true, why keep it that way? I am superficially aware of efforts regarding nodes and such, but I see no specific information or concepts regarding targets and large-scale simulations.

Thanks in advance!

19 Likes

Hi, I am not sure what the actual questions are.

Do you already have a GPU framework in mind and are you seeking the best ways to integrate it? Or are you looking to implement this new framework?
I couldn’t find any information about a MiLiAN framework that is about fluids.

I already have it (Yannik F - VFX Artist, Screenwriter & Author) and am thinking about integrating it with Blender. The questions, as stated, are mainly concerned with limitations in Blender’s ability to handle large data sets, and any other limitations an experienced dev can think of (interfacing with Blender, whether it should be core code or a Python extension, etc. …)

5 Likes

Surely some work needs to be done to make Blender handle complex setups and geometry better, but I don’t really see that as a blocker for bringing improvements to different areas of Blender. You start small, and then go from there, solving issues as you discover them.

For such an integration of a solver, the main questions to me are at the design level: how does it fit into Blender’s systems? How does it interact with the animation system? How does it interact with geometry nodes? How much artistic control does the system allow?

There is also a technical aspect: we cannot really have functionality which is only available on the GPU. That could add extra constraints on the framework, compute backends and such.

The framework is designed with VFX artists in mind, so control is paramount, but performance is very important as well. With a lot of work it could be a stand-alone thing, but why, when Blender could benefit anyway? It is rather easy to have a CPU variant (as a fallback or in full); most of the data structures map equally well to the CPU, and a reasonably efficient CPU version should be possible within days, hardcore optimizations aside. I do expect a rewrite for a full integration anyway, but I can do that.

The solvers are agnostic to framework and environment; they will do just fine on any architecture as long as some basic functionality is available to process point clouds and voxels.

The thing is, getting started is hard if the outcome is by definition (or design) probably less performant than it is worth. So perhaps it would make sense to improve Blender where needed AND to come up with a solid concept for the whole simulation paradigm before taking an existing framework and smashing it into Blender :slight_smile:

Integration with the animation system etc.: same question here, but as a Blender novice, I cannot answer it. I hope the systems are designed such that they have clearly defined ins and outs, allowing me to grab points, time and fields, and equally to modify them. If that is given, all is well; the only remaining question is again about performance and the device-vs-host workload split. With modern CPUs and their huge caches, one can also expect good CPU performance (compensating for the lower memory throughput by leaning on caching and the larger memory capacity).
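Those “clearly defined ins and outs” could be as small as a state object plus a step function. All names below are invented for illustration, with a trivial forward-Euler advection stand-in where a real solver would go:

```python
# Hypothetical host <-> solver contract: the host exposes its point data,
# the solver reads it, advances time, and writes results back. Everything
# here is a sketch; none of these names exist in Blender or MiLiAN.
from dataclasses import dataclass, field

@dataclass
class SimState:
    time: float = 0.0
    positions: list = field(default_factory=list)   # per-point (x, y, z)
    velocities: list = field(default_factory=list)  # per-point (x, y, z)

class DummySolver:
    """Stand-in solver: simple forward-Euler advection, no forces."""
    def step(self, state, dt):
        state.positions = [
            tuple(p[i] + v[i] * dt for i in range(3))
            for p, v in zip(state.positions, state.velocities)
        ]
        state.time += dt
        return state

s = SimState(positions=[(0.0, 0.0, 0.0)], velocities=[(1.0, 0.0, 0.0)])
DummySolver().step(s, 0.5)
print(s.positions)  # [(0.5, 0.0, 0.0)]
```

The point of the sketch: if the host can hand over state like this (ideally as contiguous buffers rather than Python lists), where the solver runs (CPU, GPU, in-process, out-of-process) becomes an implementation detail.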

So a plugin it is, then, because no GPU-only code can go into the Blender core. If so, I still need to understand more about the “correct” approach for such a project. Python? Other options (access to core, etc.)?

4 Likes

I’m interested to know more about this framework. Is it open source? Is it available somewhere for me to check out?

I think you’re putting the cart before the horse a little bit. I’d probably start with a survey of the simulation frameworks already out there and see what their licenses, strengths and weaknesses are before re-inventing the wheel. We currently have Mantaflow for simulation; could we add GPU support to that instead?

If you have already written your framework, why should Blender choose yours? We’re currently stuck with Mantaflow, which was integrated by a single dev who then disappeared, and now no-one seems willing to maintain it. I’d very much like not to repeat that experience. How will things be different if we go with your framework?

4 Likes

No, it is proprietary at this point. It was first built for testing all sorts of solvers, some of which were then developed into plugins (for RealFlow, for example). I am thinking about releasing the framework itself, but about some of the solvers, like Sandy, I am not sure yet. Got some thinking to do.

2 Likes

I understand. I can tell you that fluid simulation (especially particle-based) is not only a passion but also my bread and butter, so I am well aware of MANY open source and closed source projects. The thing is, most of them operate in one of three categories, solver-wise: Lagrangian (particles only), hybrid (like FLIP), or purely Eulerian (almost any smoke solver).

Simulating free-surface flow or volumes really comes down to one of those or their variants. If you know one, you know most of them; they are all playing the same game with the same or similar math. And you will also see that they share very similar benefits and/or weaknesses, regardless of who made them :slight_smile:

The key differences are in implementation, efficiency, focus on control vs. physically correct behavior and other things. Mantaflow is (last I checked) based on the LBM method.

edit: Mantaflow actually uses a FLIP solver for liquids, so I made a mistake there. The LBM solver is Palabos (open source), introduced as “palander” for Blender. I confused the two, but the following argument still stands.

If you need a capable LBM framework on the GPU in OpenCL, check out ProjectPhysx by Moritz L. I am not all about LBM. It is awesome in science and “could” be great for creative applications, but thinking about workflow and ease of use, particles are often far more intuitive to work with than level sets and voxels, especially for grains and liquids.

I am aware of Mantaflow and I am not a fan (as a user). An LBM solver in Blender is great, but as a product or complete feature it is lacking. I am not judging the maker(s); I am just saying this is not what I have in mind.

Nobody chooses me at this point. I choose to look into Blender, or I do not. Of course I would appreciate help and support, but I am not intruding or forcing myself / my code into Blender. All questions so far are very justified. Walking away from my own projects would only happen if time or health calls for it. Avoiding that starts with good questions :slight_smile:

Now … being a potential chosen one: what do you need? I think that is also a problem, since I cannot find specific, thoroughly laid-out concepts for exactly that. Still searching.

13 Likes

So, the biggest problem with this is that it’s solely CUDA-reliant. One of Blender’s big selling points is that it’s hardware-agnostic for everyone, and it would be unwise to lock a system of this caliber behind a CUDA wall. It would have to be built on something all vendors can use. You may get away with HIP and CUDA, or see if you can use ZLUDA for Radeon, but that leaves Arc and Apple systems out in the cold. With that said, OpenCL may not work in that respect because, as far as I know, Apple has dropped OpenCL support, so you’d have to figure something out there. Maybe some Metal implementation could be done?
And again, that leaves Arc. I don’t know what oneAPI supports, but that’s what’s in Blender right now for Arc cards.

I am only one dev. CUDA + x86 will be my number one choice for a first prototype because I must keep costs low, and with CUDA I can, because I know it well. Even with inflated NVIDIA prices I get more compute for less money than with Apple, and AMD is doing okay as well.

I do not own a Mac and will likely never buy one “just to play with Metal”, as it would have to be recent (even an M1 will still run OpenCL, but with limitations). Since OpenCL was initiated by Apple and then dropped like a hot potato (a pattern), I have zero interest in Apple’s stack and politics, although some of their technology and implementations are impressive, and Metal is as unlikely to die as CUDA … or is it? :upside_down_face:

I realize that it is not about my preferences, and all users should be respected in the Blender context, yet Apple silicon has to be last on my list of supported platforms. They have done their best to enforce such decisions on non-native devs, especially in the GPU-computing domain. Unless I can cooperate with others who can own the Apple part, I will go through Windows and then Linux, then OpenCL or others if I can.

Realistically, I cannot support three different stacks on three or four architectures (what about ARM in general?) on my own as a side project, even full-time. I absolutely prefer quality and robust builds/products over “everything is supported, everything has unique flaws, and nothing can be 100% tested”. I like to iterate quickly where possible and take my time where needed.

I am thinking about production-scale potential, which already puts a question mark on any Mac for cost reasons alone.

Things can change once bills are paid and economic concerns are resolved, but an “all platforms guaranteed” promise is unreasonable today. Quality > quantity … and I must learn more about Blender itself first :no_mouth:

7 Likes

True, but it’s not like Blender users don’t have options, like Flip Fluids for example. Now sure, it can’t match Houdini, but by all reports and videos it does a pretty good job.
Then there is EmberGen, which, while not super cheap, does at least have a permanent license option.

What is really missing, and the first thing I think of when I see a ‘Simulation’ headline, is cloth and hair simulation. Something that has all those same well-designed, fast, realistic, artist-controllable type features, either fully within Blender or, worst case, in a fairly cheap app with a permanent license.

Unfortunately that just doesn’t seem to happen; most are only interested in running water or blowing things up …

Still, very nice example videos. If nothing else, adding that to Blender would seem to put any fluid simulation needs to rest, at which point maybe someone could then look at cloth/hair.

1 Like

Good suggestion about cloth and hair. Ironically, a point-based simulation framework would already have everything it needs to do both. There are different approaches to hair and strand simulation, but particles are a very efficient and rather easy-to-control way of doing even vast amounts of intersection-free hair. I will think about it.

And where would man be without water or blowing things up^^

The Sandy solver is focused only on grains: sand, powder, fracturing, and with some extensions compressible materials like snow, etc. I guess it would be among the first solvers to go in, before all the others, as I specifically see no performant solution for that type of material. I am aware of Molecular and its history, though.

5 Likes

So, have you considered developing it for use as a plugin, I mean a bridge plugin, considering the limitations of the Python API? This would sidestep all (or the vast majority) of the issues that bother you or others. The GPL licensing and private-code issues would also be solved.

1 Like

That is 50% of the question :slight_smile: Plugin or core, either way: how? Or do you mean a stand-alone application plus a plugin to communicate with Blender? Please elaborate.

Something like this maybe? The Grove 2.0 - The Grove, see the part about portability. Although it says it is written in Rust, I am mainly referring to the approach.

2 Likes

I think this leaves me where I started. Thanks for the link though; it looks like a great system. My code could potentially interface with any host. Let me illustrate.

If “you” want a tight integration, blender must provide (or get):

  1. A capable, scalable particle system with no fear of HUGE point clouds (geometry included)
  2. A capable, scalable volume system with no fear of HUGE voxel grids (sparse or dense, irrelevant)
  3. Tools for processing such data in creative ways without crashes, freezes or lock-ups of any kind.
  4. Most importantly: a way for me to read and write this data via an API (or directly, if core code). A core implementation would potentially avoid massive redundancies, data-wise.

If blender does NOT have this, then we are on a plugin path:

a. I must use my own particle system
b. I must use my own volume system
c. I must build my own tools for post processing
  d. I still need that API, but now I only share data relevant for visualization, and all operations live in my code, CPU or GPU. This is more memory-efficient, but it also means that file export (baking) does not go through Blender. It would also, by definition, limit the degree of integration with Blender.

Exchanging data is my main concern, because I can see that Blender does not “want” me to push 300 million “particles” anywhere. So yes, a plugin seems to be the way to go. But again, I lack experience as a Blender dev and thus ask for advice and potential pitfalls.
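On the plugin path, one pragmatic pattern (hypothetical, just to make the proxy idea from above concrete) is to keep the full point cloud on the solver side and hand Blender only every k-th point for the viewport:

```python
# Hypothetical plugin-side decimation: the solver keeps the full point
# cloud; the host only ever receives a strided subset as a viewport proxy.
def viewport_proxy(full_count, budget):
    """Return (stride, proxy_count) so that proxy_count <= budget."""
    stride = max(1, -(-full_count // budget))      # ceiling division
    shown = (full_count + stride - 1) // stride    # points actually sent
    return stride, shown

stride, shown = viewport_proxy(300_000_000, 2_000_000)
print(stride, shown)  # 150 2000000
```

With this split, only the 2 million proxy points ever cross the Python boundary per frame; baking the full 300 million stays entirely inside the plugin’s own I/O path, which is exactly the trade-off point (d) above describes.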

It is also possible that the time is just not right, regardless of past development achievements. Maxon’s Cinema 4D took roughly 15 years to even get started on a unified simulation system and nodes, plus a serious update of their core, for similar reasons. Even today, C4D’s USD support is lacking and its performance with point clouds and particles is years behind others. If Blender as a product or community has no true desire to focus (and I mean focus) on this area, then plugins will always be the “solution”. Not a deal-breaker, but it changes the game considerably, I think (from a long-term dev perspective).

8 Likes

A significant pitfall is not only the lack of a C++ API (which of course you’ve already noted), but that the Blender Foundation is categorically opposed to offering one, full stop. This has been stated on more than one occasion in the past, and I’ve seen no public discussion of this changing (nor even a desire for it to change).

(I agree that such a tool would be very advantageous for your goal; I’m just offering an FYI to the reality.)

2 Likes

Thank you, yes, I have read a lot about such requests and their denial over the past years, but I did not understand it as a complete “NEVER”. Getting closer to that now :wink: It would not be a problem if the Python API were equally performant, which it clearly is not, based on my research so far. So either the goal is to have everything in core, or simply not to care. Or is it “simply” a matter of protecting the devs from complete overload?

I cannot pretend to know better at this point, but hearing the reasoning behind such a decision would be great. After all, Blender is written in C/C++, if I am not mistaken.

A representative thread of a discussion that included both users and Blender developers is here. The whole thread isn’t about that, but a significant part of it is. Note that the situation isn’t quite as thorn-neverwake describes: if you look at this post you will see support from a key Blender developer for the general idea, with the main reason cited for not having it being that it is a lot of work to do right.

4 Likes