Offloading heavy geometry

In my opinion the idea of scalability is quite easy for the “general” public to understand, but it has to be introduced with an explanation of how managing heavy scenes without loading the whole dataset into memory helps.
Here is the Disney island scene dataset from the Moana CG animation. It takes 265 gigabytes on disk. How many workstations at Disney have enough RAM to load the whole dataset at once? My bet is very few, rendering servers included. And datasets are probably only getting larger, which means robust automatic data handling could be useful for Blender.

Yeah, I downloaded that set last year. It’s completely insane! On the opening shot of the movie Clouds (on Disney+), the scene was over 75M polys (no instances) and I was able to do it by using the proxy tool add-on. Otherwise it was too heavy for my 32-core, 64 GB RAM machine. It had 5 gigs of textures. I had to render everything on the farm. Yet Cycles rendered each frame at 4.5K in less than 5 minutes per frame. I want to push Cycles to its limits so I can evaluate future projects. We have a war movie coming later this year.

3 Likes

Matt Pharr has a wonderful series of blog posts on how he optimized pbrt to render this specific scene

Part 1, 2, 3, 4, 5

enjoy!

5 Likes

Has anyone done some tests with the new geonodes modifier?
It’s possible to switch geometry using a driver:
#depsgraph.mode=='RENDER'
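
A minimal bpy sketch of that driver setup, assuming the active object has a Geometry Nodes modifier named "GeometryNodes" whose node group contains a Switch node called "Switch" (both names are assumptions for illustration):

```python
import bpy

# Sketch only: names below are assumptions, not a fixed convention.
obj = bpy.context.object
tree = obj.modifiers["GeometryNodes"].node_group
switch_socket = tree.nodes["Switch"].inputs["Switch"]

# Drive the boolean Switch input with the expression quoted above, so the
# heavy branch is only evaluated when the depsgraph is built for rendering.
fcurve = switch_socket.driver_add("default_value")
fcurve.driver.type = 'SCRIPTED'
fcurve.driver.expression = "depsgraph.mode == 'RENDER'"
```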

I wonder what the impact on RAM is.

1 Like

Not the same thing… but the “Mesh Sequence Cache” method works nicely with particle systems, geometry nodes and the new asset browser.
From my own tests we get some memory savings using this method…
It seems the Blender devs did some work on procedural Alembic loading, and they plan to extend it so Cycles loads the Alembic directly from disk.

https://developer.blender.org/T79174

My workflow… maybe you can make an add-on to streamline this process :slight_smile: :
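
A minimal bpy sketch of that kind of Alembic / Mesh Sequence Cache setup (the file path is a placeholder, and the importer options may differ between versions):

```python
import bpy

# Import an Alembic cache so each mesh gets a Mesh Sequence Cache modifier
# reading from disk instead of storing the geometry inside the .blend.
# The path is a placeholder for illustration.
bpy.ops.wm.alembic_import(
    filepath="//caches/building_A.abc",
    set_frame_range=False,
    is_sequence=False,
)

# The importer leaves the new objects selected; each one references the
# shared CacheFile datablock through its Mesh Sequence Cache modifier.
for obj in bpy.context.selected_objects:
    for mod in obj.modifiers:
        if mod.type == 'MESH_SEQUENCE_CACHE':
            print(obj.name, "streams", mod.object_path, "from", mod.cache_file.filepath)
```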

Not a big fan of this decentralized .abc workflow. IMO every model needs to be stored within the .blend as hidden object/mesh data.

The entire film industry is based on decentralized Alembic (or USD) files. This way you can easily share assets between different scenes, and if you modify one, it will update in all the files using it.

8 Likes

I understand… I also like the centralized all-in-one .blend file workflow, but for big scenes this doesn’t work… big files, long save times, very slow render export…

All the other 3D apps have this decentralized workflow… V-Ray -> vrmesh/abc… Corona -> Corona proxy… Arnold -> standins… Maya/Houdini/3ds Max -> Alembic/USD… Unreal Engine -> FBX/Alembic.

Almost all my V-Ray scenes would be impossible without this decentralized workflow.

4 Likes

@Funnybob How important is it for your pipeline that Blender’s Python matches the VFX Reference Platform? Sticking with 3.7 as specified by the VFX platform vs upgrading to the current 3.9 is being discussed in the task VFX Reference Platform 2021 Compatibility. If sticking to 3.7 is helpful, feedback is requested on the reasons:

  • Do they use modules that aren’t compatible with newer Python versions?
  • Do they use Python modules which are part of VFX platform?
  • If using a newer Python would break their tool-chain, it would be useful to know in what way exactly.

1 Like

We are using 2.7 / 3.7 for the majority of our plugins, as a lot of them are used cross-application. Nuke, Maya, and Houdini adhere to the Python versions in the reference platform, so for stability we try to keep our environment in line with that. Thanks for asking! We appreciate it!

1 Like

I am keenly following this topic, as it’s one of the constraints of using Blender for large architectural scenes. Perhaps the only real constraint left to Blender’s wider adoption in the architecture/arch-viz industry is indeed the lack of proper proxy workflows. I know Lodify helps some. I spent a good 3 days or so converting a 3ds Max model with V-Ray proxies, part by part, into Blender. Within Blender, the viewport would be unresponsive with all the buildings on, so I created manual proxies of the buildings and enabled the complex geometry only for rendering purposes. However, our designs required being able to see some detailed geometry, so we had to be smart about isolating the specific parts we needed.
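
A minimal sketch of that manual-proxy approach, assuming the heavy buildings live in a collection named "Buildings_HiRes" (a hypothetical name):

```python
import bpy

# Draw the heavy buildings as bounding boxes in the viewport, but keep the
# full geometry for rendering. Collection name is an assumption.
for obj in bpy.data.collections["Buildings_HiRes"].all_objects:
    obj.display_type = 'BOUNDS'   # cheap to draw in the viewport
    obj.hide_render = False       # full geometry is still used by Cycles

# To inspect a specific building in detail, switch it back on demand:
# bpy.data.objects["Tower_03"].display_type = 'TEXTURED'
```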

1 Like

The creator of the proxy tool add-on got in touch with me and he’s working on a new version. Like the one before, it generates a point cloud, but this time the hi-res geo is saved in a separate file and loaded only at render time. It’s amazing. I tested it with my 27M poly geo and instead of taking 9 gigs of RAM, my layout file only uses 77 megs! It loads so fast too! The scene is crazy light and I can move around without any slowdown. He’s working on making it work with collections. There’s also an issue where it doesn’t remember the transformations of the objects that are being converted into a point cloud. This is what he wrote me: “Yes, that’s because the objects in the library file are not linked to the scene by default and for now I have no idea how to access a library file’s content. Maybe I will look into this if it’s really necessary.”

So we’re almost there. Not ready for primetime yet but it’s coming.
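
For what it’s worth, a rough sketch of the underlying idea: keep only lightweight data in the layout file and pull the high-res geometry out of a separate library .blend via bpy.data.libraries.load(). The library path and names below are placeholders, not the add-on’s actual code.

```python
import bpy

# Placeholder path to a library .blend holding the high-res geometry.
lib_path = "//libraries/hero_asset_hires.blend"

# libraries.load() lets you list and link a library's content, which is one
# way to reach objects/collections that aren't in the current scene.
with bpy.data.libraries.load(lib_path, link=True) as (data_from, data_to):
    data_to.collections = list(data_from.collections)

# Instance the linked collections with empties so the layout file stays tiny;
# in practice this could be deferred to render time via a handler.
for coll in data_to.collections:
    if coll is not None:
        inst = bpy.data.objects.new(coll.name + "_instance", None)
        inst.instance_type = 'COLLECTION'
        inst.instance_collection = coll
        bpy.context.scene.collection.objects.link(inst)
```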

7 Likes

Something relevant for this discussion?

https://developer.blender.org/rBb64f0fab068169a7c379be728aae8994eb893b18

1 Like

Speaking of Katana etc., do you have an opinion on or use for Gaffer to assemble/render your scenes? It supports the standalone version of Cycles AFAIK.

https://www.gafferhq.org/

Creating tons of extra external files just for some display optimization of one single .blend file?
What a mess.

Gaffer doesn’t support Cycles (actually, nothing outside Blender supports it because there’s no fully implemented standalone version), so I have no use for it, but thanks for the suggestion. :slight_smile:

Haven’t checked on the progress of this recently so maybe not ready yet – but I’ll keep an eye on it

Well my friend, let me give you a real-life example. Optimus Prime, in the first Transformers, had 10 levels of detail (LODs). It was ridiculously heavy. It was impossible to show it at maximum resolution because no machine could handle it (at the time). You could only see it at render time. Now imagine 5, 10, 15 Transformers fighting, on their own planet, filled with insane detail. The film industry is a totally different game. It’s simply impossible to load all that stuff into memory for display. On Welcome to Marwen, Zemeckis’ latest feature film, we needed 44,000 machines to render it and some of them needed 380 gigs of RAM. And it’s not like it’s Star Wars or a Marvel movie. So yeah, display optimisation is a must if we want Blender to shine in the VFX world.

11 Likes

You’re talking about big-studio workflows here; most Blender users work alone and will not benefit from such features if they’re exclusively designed for such huge-budget workflows.

Think about all the freelancers out there using Blender.
I just hope there will be an option to keep everything centralized too. I don’t see why it needs to be one way or the other; it can be both via an option in the interface.

Medium-size studios too. We’re already reaching the limits of Blender where I work and we need better solutions. Maybe you don’t need it; that doesn’t mean it’s not needed. I don’t care about Grease Pencil because I have no use for it, but it’s good that it’s there. Blender is developing all the tools needed for major projects: Alembic, USD, VDBs, etc. What’s the point of developing them all if not to try to gain more ground in the film industry?

9 Likes