Offloading heavy geometry

I hope it will happen soon. I’m the CG supervisor at Real by Fake, a VFX company. We ditched Maya for Blender, and we are developing an extraordinary pipeline to integrate Blender with Houdini, Nuke, fTrack and the render farm. We just finished a movie and we’re starting another, much bigger one soon, and I hope I won’t reach Cycles’ limits because of this issue. Actually, Cycles is not the problem; handling huge scenes is the problem. I’ve been in the industry for more than 20 years, working on Hollywood blockbusters with Maya/Houdini/RenderMan pipelines. Managing huge scenes is what it’s all about. Katana is the way to go, but standalone Cycles is not there yet. The Foundry told me that they are working to support Cycles; I don’t know if that’s the saleswoman’s pitch or if it’s real, though. We couldn’t do Jungle Book or Coco in Blender. Not because the software or the renderer are not good enough; I have full confidence in Blender. It’s all about managing the enormous files we have to deal with in the film industry.

We want to become the biggest Blender film VFX company. That’s our goal! :slight_smile:

13 Likes

That’s cool.

At that size, you could dedicate one or two developers to overcoming this problem, like Tangent Animation does. Stefan Werner does not work at the Blender Institute; he works for Tangent, and some of the new Cycles features come from him. Having another studio providing this kind of improvement would be great, and very beneficial both to you and to everyone. And I’m sure that if your devs need guidance, they will receive help from the main devs :slight_smile:

5 Likes

With pre/post-render callbacks you could make something work for a particular studio, it’s just not native support and wouldn’t integrate cleanly.

1 Like

Thanks Brecht! If you want to laugh a little bit, check out my latest clip. :slight_smile:

6 Likes

Somebody from the forums on Blenderartists.org sent me this, so I have a solution. I asked our developer to modify it so that it would detect the name of the object (based on the Proxify add-on) and swap it for the high-res version, so foo lo would be switched to foo hi. He decided to push this even further. We’ll have, in the properties, a way to turn several display options on and off (origin only, bounding box, point cloud, LOD or full geo). It will be absolutely awesome. Once it’s done, how can we submit this in case the Institute would like to integrate it into Blender?
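For what it’s worth, the name-based swap could be sketched like this. This is just a minimal sketch assuming a `_lo`/`_hi` suffix convention; the suffixes and helper names are my own stand-ins, not what the add-on actually uses:

```python
# Sketch of a name-based lo->hi swap. The "_lo"/"_hi" suffixes are an
# assumed naming convention for illustration only.

def hires_name(name, lo_suffix="_lo", hi_suffix="_hi"):
    """Map a low-res datablock name to its high-res counterpart."""
    if name.endswith(lo_suffix):
        return name[: -len(lo_suffix)] + hi_suffix
    return name  # not a proxy name; leave unchanged

def swap_all_to_hires():
    """Run inside Blender: swap every *_lo mesh for its *_hi version."""
    import bpy  # only available inside Blender
    for obj in bpy.data.objects:
        if obj.type != 'MESH':
            continue
        target = hires_name(obj.data.name)
        if target != obj.data.name and target in bpy.data.meshes:
            obj.data = bpy.data.meshes[target]
```

`swap_all_to_hires` would go in a `render_init` handler, with the reverse mapping in `render_complete`.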

Also, I saw that when you have very heavy geometry on screen (and still have RAM and VRAM available), the entire interface becomes sluggish. I can understand the slowdown in the viewport, but what I don’t understand is why the rest of the interface is affected. Even doing things like File > Save As would take about 30 seconds just to get the File menu to pop up. Could it be because the entire interface is drawn with OpenGL and it needs to evaluate everything on screen no matter what? Is there a way you could tell Blender to only refresh the viewport when needed?

Here’s the script:

import bpy

original_geometry = None  # will hold the swapped-out Mesh datablock

# Note: handler call signatures vary between Blender versions, so the
# handlers accept extra positional arguments defensively.
def prepare_render(scene, *args):
    global original_geometry

    # append geometry from another blend file
    # https://docs.blender.org/api/current/bpy.types.BlendDataLibraries.html#bpy.types.BlendDataLibraries.load
    with bpy.data.libraries.load("C:\\Users\\user\\Documents\\boxy.blend",
                                 link=False, relative=False) as (data_from, data_to):
        data_to.meshes = [name for name in data_from.meshes if name == "Sphere"]

    # get the placeholder object
    cube = bpy.data.objects["Cube"]
    # remember its original geometry
    original_geometry = cube.data
    # swap in the appended geometry
    cube.data = bpy.data.meshes["Sphere"]

def restore_render(scene, *args):
    global original_geometry

    # get the object
    cube = bpy.data.objects["Cube"]
    # the appended geometry we are about to discard
    geometry_to_delete = cube.data
    # put the original geometry back
    cube.data = original_geometry
    # delete the appended geometry from the current blend file
    bpy.data.meshes.remove(geometry_to_delete, do_unlink=True,
                           do_id_user=True, do_ui_user=True)

# register the handlers
bpy.app.handlers.render_init.append(prepare_render)
bpy.app.handlers.render_cancel.append(restore_render)
bpy.app.handlers.render_complete.append(restore_render)
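One caveat if you re-run a script like this from the Text Editor: each run appends the handlers again, so the swap fires multiple times. A small sketch of a guard (the helper name is mine, not part of the script above):

```python
def append_once(handler_list, fn):
    """Append fn to an app-handler list, removing stale copies first.

    Re-running a script in Blender's Text Editor would otherwise stack
    duplicate handlers that each perform the swap again.
    """
    for existing in list(handler_list):
        if getattr(existing, "__name__", None) == fn.__name__:
            handler_list.remove(existing)
    handler_list.append(fn)

# Inside Blender, registration would then look like:
# import bpy
# append_once(bpy.app.handlers.render_init, prepare_render)
# append_once(bpy.app.handlers.render_cancel, restore_render)
# append_once(bpy.app.handlers.render_complete, restore_render)
```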

1 Like

Hey i’m the guy who created Proxify/Lodify

Great idea, i may implement such offloading option

6 Likes

I was hoping for that option in Lodify^^
Feel free to have a look at the Proxy Tools addon. I didn’t get the offloading part to work as I wanted to, though.

The problem with the “point cloud” display is that Blender currently doesn’t correctly display collection instances that consist only of vertices.

I think that for something like this to be implemented, it would have to be done in C/C++ and integrated into the codebase.

Being Python, it could suffer from performance problems. Also, are you sure it’s doing what you wanted?

I mean, how can Blender feed the object to Cycles if the object is not loaded into the blend file and Cycles doesn’t have a “source HD” file where it can get the actual object?

Things like this can help viewport performance and even file size, but I’m not sure it would help at all at render time. I mean, even when you don’t see the actual object, that object needs to be loaded before Cycles gets the scene, so the amount of RAM eaten by Blender itself would be the same. In the end you are not really saving memory when you render the scene, just while you work.

Not saying it’s a bad thing, just that I’m not sure it’s what you asked for.

I may have misunderstood what you need :slight_smile:

Doesn’t linked data do the same? Here is just an example: https://www.artstation.com/contests/nvidia-metropia-2042/challenges/58/submissions/41667
The scene was assembled with linked data, about 200-250 billion tris. And you can do almost anything with linked data; it works with everything… Probably I just didn’t understand the point of your idea.

P.S. The scene was done on a 2017 iMac with a 3.5 GHz quad-core i5, a Radeon Pro 575 4 GB and 64 GB of RAM, but it only took about 35-40 GB to render.

The swap is done before rendering starts. So on screen you only see a box or a point cloud, but Cycles will get the whole thing. Imagine you are doing The Lion King: you have Simba in the foreground, but the background is composed of thousands of plants and rocks, super heavy stuff, so heavy that you won’t be able to move your camera around. So that’s the only solution. You can keep the lion hi-res, but if you don’t need to see the background hi-res, there’s no point in showing it. This is how it works in the film industry; that’s how Katana works. And if you don’t use Katana, you can do it directly in Arnold or RenderMan. It will not make the rendering faster; it’s just to free the user’s memory and make heavy scenes manageable. The script I provided earlier does exactly that: you have a cube on screen, but it will render a sphere. It’s a proof of concept.

2 Likes

This scene only seems heavy; there are not that many different objects in it. They are all instances, so even if you make a million copies of a tree, it’s still going to take the same amount of memory, both at render time and in the viewport. In my example, I have 106 M polys, but all individual ones, no instances. Take your scene and convert all your instances into real copies instead, and I can guarantee you that you won’t be able to move around your scene, especially if you only have 5 GB of VRAM. :slight_smile:
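You can actually check from Python whether duplicates really share their mesh data: linked duplicates (Alt+D) point at one Mesh datablock, full copies (Shift+D) each own their own. A hedged sketch; the helper names are mine:

```python
def is_instanced(mesh):
    """True if this mesh datablock is shared by more than one user.

    In Blender, linked duplicates (Alt+D) share one Mesh, so its `users`
    count is > 1; full copies (Shift+D) each own a separate Mesh.
    """
    return mesh.users > 1

def count_unique_meshes():
    """Run inside Blender: how many distinct mesh datablocks the file holds.

    A forest of a million instanced trees still counts as one mesh here.
    """
    import bpy  # only available inside Blender
    unique = {obj.data for obj in bpy.data.objects if obj.type == 'MESH'}
    return len(unique)
```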

How can they be individual if you create a proxy and duplicate it? My point is that this script just does the same thing that linked data does. And of course you can turn on “Display as Bounds” for anything and go to 1000 trillion polys, and probably more, at render time. There is only one difference between this proxy script and linked files: the point cloud view for models.
Also, you can turn off viewport display (not to be confused with hiding) for parts that you have already put in place. They will appear in the render, but they will not occupy VRAM.
Or am I missing something?
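The display strategies being compared here are all scriptable. A sketch of the three levers, assuming Blender 2.8x attribute names (`display_type`, `hide_viewport`); the helper itself and the mode names are my own illustration:

```python
def set_display(obj, mode):
    """Apply one of the discussed display strategies to an object.

    mode: 'bounds'  -> draw only the bounding box (object still evaluated)
          'offload' -> disable in viewports (frees VRAM, still renders)
          'full'    -> normal textured display
    """
    if mode == 'bounds':
        obj.display_type = 'BOUNDS'
        obj.hide_viewport = False
    elif mode == 'offload':
        obj.hide_viewport = True   # the monitor icon, not the eye (hide)
    elif mode == 'full':
        obj.display_type = 'TEXTURED'
        obj.hide_viewport = False
    else:
        raise ValueError(f"unknown mode: {mode}")

# Inside Blender, for example:
# import bpy
# set_display(bpy.data.objects["Cube"], 'bounds')
```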

Nope. Even if objects (or collections) are linked, they will still be loaded into memory. The file size will be smaller, but all the geo will still load. And even if you turn off visibility in the viewport, it will free the VRAM but not the RAM.

Either I really don’t get you, or I’m seeing different numbers: https://drive.google.com/file/d/1bALCwbTy7IujkREL4BESFc1e5QEUCxpA/view?usp=sharing

Oh! That’s interesting! I tried turning off the visibility and it didn’t flush the memory, but disabling does. Nice. But if you disable, you don’t see anything at all. I still need to see something, a bounding box or a point cloud.

Turning off visibility equals hiding the object; it’s just an editorial tool. Disabling in the viewport is a different thing. In production, when you assemble large scenes like cities or vast interiors, you always use the layout as guidance. Our studio uses different software, but the principle is common to any software.
Anyway, you have to approve the basic shapes and the concept with the client, which requires a layout and concept art, so there is always a layout. And, more importantly, a layout is just a low-poly version of the scene, and it’s much better than wireframe bounds or a point cloud.
Also, you can do two versions of the layout: basic shapes (for the start) and a more detailed layout assembled with links. That way you can create props and just re-link them into the layout, and they will be placed in exactly the right spot, which is convenient because you don’t have to search the scene for the exact place for each prop. And when some parts of your scene are done, just disable them for the viewport.
You can see an approximate pipeline in my work, linked in my first post.

P.S. And, of course, you can assemble props or parts of a scene with linked data, and then link them into the layout.
P.P.S. Also, you can disable for the viewport all the details that are not important in the prop file. After linking, they won’t be loaded for the viewport. So, for example, a building with sculpted decor of over 10 million polys will show as just basic shapes, but all the details will be loaded for render. That way you really can go to trillions in polycount.

And related to your question

So, is it possible to switch a geometry for another one only at render time without affecting the currently opened file?

Yes. You just have to create viewport geometry in addition. Then you disable the original geometry for the viewport, and the low-poly geometry for render. That way, when you link your prop into a scene, it consumes memory only for the low-poly basic shapes. That’s it: you have a low-poly viewport model to work with, and when you start rendering, Blender will use the hi-poly models.
And there is an easy way to make the low-poly version: the Decimate modifier. Just duplicate the complex parts of a model and decimate them to low-poly (simple Collapse will do). Then use Alt+C (Convert To) and choose Mesh; this applies the modifier to each model without joining them. You can even link the hi-poly models to the low-poly ones, turn them off in the viewport, and just work with the low-poly versions.
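That duplicate-and-decimate workflow is easy to automate. A hedged sketch, with the ratio calculation factored out (the function names, the target polycount, and the modifier name are all my own choices):

```python
def decimate_ratio(current_polys, target_polys):
    """Ratio for a Decimate (Collapse) modifier to hit a target polycount."""
    if current_polys <= 0:
        raise ValueError("current_polys must be positive")
    return min(1.0, target_polys / current_polys)

def make_viewport_proxy(obj, target_polys=5000):
    """Run inside Blender: duplicate obj, decimate the copy for the
    viewport, and keep the original hi-poly mesh for render only."""
    import bpy  # only available inside Blender
    proxy = obj.copy()
    proxy.data = obj.data.copy()
    bpy.context.collection.objects.link(proxy)
    mod = proxy.modifiers.new("proxy_decimate", 'DECIMATE')
    mod.ratio = decimate_ratio(len(obj.data.polygons), target_polys)
    obj.hide_viewport = True    # hi-poly: render only
    proxy.hide_render = True    # low-poly: viewport only
    return proxy
```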

Note that for disabled objects, the original mesh data is still always loaded into memory. Where you save memory is by avoiding modifier evaluation, for example when most of the memory comes from a Subdivision Surface modifier or a modifier that loads the mesh from an Alembic file. If you have a high-poly mesh without modifiers, it won’t help as much.

Yep. I tried with my 27 M poly model. It takes 9 GB of memory; if I disable the visibility, it still takes 2.9 GB.

1 Like

@brecht The Mesh Sequence Cache modifier with viewport display turned off works fine for offloading meshes… :slight_smile:

Can you add an option to flip the axis? (for Alembic files exported from other software)
Also, material ID assignment seems to have some problems: