I am currently evaluating Blender for a subsidiary of one of the largest automotive manufacturers in Germany, who want to move their 3D visualization pipeline from Maya to Blender, with Eevee as the output engine. They are dealing with huge datasets, which they try to handle with collections, linked libraries etc.
Car models from CAD software are obviously rather dense, even if you export them in an already simplified form. This becomes a problem when trying to display them in the viewport, because Blender uses a lot of RAM to display the geometry.
So far they have been using Maya, which was able to handle their car configurations. With Blender, however, they are running into memory limitations.
Comparing memory usage
In order to compare how the two programs handle geometry, they created a test configuration that uses the same amount of geometry in Blender and in Maya.
The company has simulated a case with 517 files, each containing 5 subdivided Suzannes with 62,976 triangles each.
All in all that is 517 × 5 × 62,976 = 162,792,960 triangles.
Maya
In Maya they import the objects from the 517 .mb files as references, which as far as I know is more or less the same as linked libraries in Blender. At first, however, the 2585 Suzannes are not displayed in the viewport; they are hidden.
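For context, the Maya side of such a setup boils down to something like this (a minimal sketch using maya.cmds; the glob path is an assumption, not their actual script):

```python
# Sketch: reference all 517 .mb files, keep the geometry hidden.
import glob
import maya.cmds as cmds

for path in sorted(glob.glob("/path/to/mb_files/*.mb")):
    cmds.file(path, reference=True)  # comparable to linking a library

cmds.hide(cmds.ls(geometry=True))  # keep everything out of the viewport
```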
This takes about 12.2 GB of memory.
In the next part of the test they display all the objects in the viewport.
RAM usage spikes to about 25 GB while Maya is “uploading” the objects to the viewport. After that, however, memory drops to about 14 GB.
Blender
For Blender they wrote a script that saves their basic test scene with the 5 Suzannes to 517 blendfiles.
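That script ships inside TestScene.blend (see below); the idea is roughly this (a sketch, with output folder and file names as assumptions):

```python
# Sketch: save 517 copies of the currently open test scene.
import os
import bpy

out_dir = bpy.path.abspath("//TestFiles")  # next to the open blend file
os.makedirs(out_dir, exist_ok=True)

for i in range(517):
    path = os.path.join(out_dir, "Test_{:03d}.blend".format(i))
    # copy=True writes a copy without switching the currently open file
    bpy.ops.wm.save_as_mainfile(filepath=path, copy=True)
```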
In their file they are also testing the impact of nested parenting, but according to my tests this doesn’t impact memory usage much.
I wrote a script that links the 2585 subdivided Suzannes from the 517 blendfiles to an excluded collection in a new scene, so that the viewport does not have to display them. Once all libraries are linked, Blender uses 10.4 GB of RAM, which seems even better than Maya.
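The core of that script looks roughly like this (a sketch; the folder path is an assumption, and I assume each file exposes a collection whose name starts with “Test”, matching the default pattern mentioned below):

```python
# Sketch: link the "Test" collection from every blend file into one
# holder collection, then exclude it from the view layer so the
# viewport does not have to display the geometry.
import glob
import bpy

holder = bpy.data.collections.new("new_import")
bpy.context.scene.collection.children.link(holder)

for path in sorted(glob.glob("/path/to/TestFiles/*.blend")):
    with bpy.data.libraries.load(path, link=True) as (data_from, data_to):
        data_to.collections = [name for name in data_from.collections
                               if name.startswith("Test")]
    for coll in data_to.collections:
        if coll is not None:
            holder.children.link(coll)

# Exclude the holder in the view layer; the data stays linked in RAM
# but is not sent to the viewport.
bpy.context.view_layer.layer_collection.children["new_import"].exclude = True
```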
Now I instance the collection containing all the linked libraries into the active scene. This takes a while, but once all 2585 Suzannes are visible in the viewport, Blender reports, according to its own stats, 23.3 GB of RAM and 8 GB of VRAM. The system monitor (Linux) shows 31.3 GB.
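The instancing step itself is just a collection-instance empty, the same thing Add > Collection Instance creates (a sketch; the object name is an assumption):

```python
import bpy

# Create an empty that instances the holder collection.
inst = bpy.data.objects.new("new_import_instance", None)
inst.instance_type = 'COLLECTION'
inst.instance_collection = bpy.data.collections["new_import"]
bpy.context.scene.collection.objects.link(inst)
```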
If I save and reopen the scene now, memory usage is the same. It also doesn’t matter if the geometry is spread over many objects or just a few. The more triangles, the more memory is needed.
Testfiles
Here are the two files for testing:
https://www.dropbox.com/s/azrby05y47xh5p9/TestScene.blend?dl=0
https://www.dropbox.com/s/01ck37pb88monsz/import.blend?dl=0
The first one, “TestScene.blend”, contains the 5 subdivided Suzannes and a script that will write 517 copies of itself to the folder “TestFiles”.
Once those files have been written you can open “import.blend”.
When you run the script it will open a panel in the sidebar of the 3D viewport on the left, where you can adjust the path to the folder with the 517 files if needed. The field “Pattern” can be used if you want to import collections with a different name; the default of “Test” is enough for this test.
When you click “Car import” it will link all the collections from “TestFiles” into a new scene.
Once the import has finished you can hit Shift-A in the viewport and add the collection “new_import”. This will make the Suzannes visible in the viewport and you can watch the RAM filling up.
So?
An entire car configuration has far more triangles than this test scene, so they are running into hardware limitations.
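For a rough sense of scale, here is a back-of-envelope calculation from the numbers above (a sketch; it ignores the baseline memory of an empty session, so the bytes-per-triangle figures are only approximate):

```python
# Rough bytes-per-triangle from the measurements above.
tris = 517 * 5 * 62_976  # 162,792,960 triangles
for name, gb in (("Maya", 14.0), ("Blender", 23.3)):
    print(f"{name}: ~{gb * 1024**3 / tris:.0f} bytes per triangle")
# Prints roughly 92 for Maya and 154 for Blender, so an entire car
# configuration multiplies memory usage accordingly in either program.
```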
The question is: can we do something about that in Blender? Is there a way to use libraries more memory-efficiently? Maybe turn off some animation features? For example, tag a library on import as non-animatable in order to cut down the memory needed for the geometry? Or improve memory usage for geometry in general?