Offloading heavy geometry

Renderman, Arnold, Guerilla, Mental Ray and Maya (probably Mantra too) all have the ability to offload geometry and use a “proxy” version instead. By proxy, I’m not talking about Blender’s proxies. It’s about being able to offload heavy geometry out of the scene and replace it with just an origin, a bounding box, a low-res geo or a point cloud, interchangeable on the fly. The high-res version is loaded only at render time, which frees up memory for the viewport.

I saw different add-ons that will let you generate proxies, like Proxify, but they keep the high-res version of the model in the scene, which defeats the purpose. Proxy Tools is buggy but awesome for its ability to generate point clouds. It makes interaction in the viewport faster but unfortunately still keeps all the geo in memory.

When you save a file where proxies were generated by Proxy Tools, you can see that it creates three versions of the geometry. The one called proxy is a collection that includes the low-res and the hi-res versions. If I link directly to the low, I only get the point cloud, which is what I want. Now, to get this à la Arnold, I would need a pre-render script that would switch the lo for the hi version of the object, but only for rendering, without affecting my current file.

That was the topic of my latest video on my Blender YouTube channel.

You can skip to 8:55 to better understand what I’m trying to achieve.

So, is it possible to swap one geometry for another at render time only, without affecting the currently opened file?

Happy new year to you all!


Here’s a link to a file where a proxy point cloud has been generated by proxy tool:

and one for the add-on:


Yes, it’s possible; you can do that with linked instanced collections.

Just link a collection and instance it as many times as you want; the original geometry will be loaded in memory just once, and it will be reused as many times as needed.

You can also have two sets of instanced collections, hi-res and lo-res. Then enable a container collection with the hi-res just for render, and a container collection with the lo-res just for the viewport, and that’s it.
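A minimal sketch of that two-collection setup, driving the viewport/render visibility flags from Python. The collection names "HiRes" and "LoRes" are placeholders, not something Blender defines, and the `bpy` import is guarded so the snippet can at least be parsed outside Blender:

```python
# Sketch only: bpy exists only inside Blender, so guard the import
# to keep this snippet importable elsewhere.
try:
    import bpy
except ImportError:
    bpy = None

def use_lores_in_viewport(hi_name="HiRes", lo_name="LoRes"):
    """Show the lo-res collection while working, the hi-res one in renders.

    Both collection names are placeholders; adapt them to your scene.
    """
    hi =[hi_name]
    lo =[lo_name]
    hi.hide_viewport = True   # hi-res: hidden while working...
    hi.hide_render = False    # ...but included when rendering
    lo.hide_viewport = False  # lo-res: visible while working...
    lo.hide_render = True     # ...but excluded from renders
```

As noted further down the thread, this only controls what gets drawn and rendered; it doesn’t by itself keep the hi-res data out of RAM.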

Linked collections don’t fix the issue. You still need to load the geometry in memory, linked or not. Add-ons like Proxify will create LODs, but that makes it worse, as now you have more geometry in memory. What I need is a point cloud or a bounding box in the viewport, with the hi-res loaded only by the renderer. Please take a look at my clip at 8:55; it’s better explained there. You will see how Arnold works: it only displays a bounding box, which takes no memory at all, but at render time it loads the hi-res. You never see the hi-res in the viewport (well, you can if you want, but the idea is not to show it).
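For comparison, the closest built-in Blender equivalent to Arnold’s bounding-box display is the object’s display type, though, as the post above says, this only changes drawing: the full mesh still sits in RAM. A sketch (the object name is a placeholder, and the `bpy` import is guarded so the snippet parses outside Blender):

```python
# Sketch only: bpy exists only inside Blender.
try:
    import bpy
except ImportError:
    bpy = None

def show_as_bounding_box(obj_name="BG_plants_hi"):
    """Draw an object as just its bounding box in the viewport.

    Unlike Arnold's proxies, this only changes how the object is drawn;
    the full mesh is still loaded. The object name is a placeholder.
    """
    obj =[obj_name]
    obj.display_type = 'BOUNDS'
```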


It’s not currently supported.

There are Cycles Alembic and USD procedurals under development which should eventually make this possible.


Oh! An answer from the master himself! :slight_smile:

I just saw this. Could it help?

Happy new year to everyone at the institute!

@Funnybob, one important thing if you follow the idea of instanced collections: if you keep the hi-res collection disabled with the little screen icon, AFAIK it’s not loaded in memory. However, at some point it will have to be loaded, since the Blender exporter to Cycles needs to read it, so I’m afraid it does not do what you want; it just frees up memory while you work in the scene. Loading what’s needed only at render time would be great, and I hope we can have such important improvements in Cycles soon :slight_smile:

I hope it will happen soon. I’m the CG sup at Real by Fake, a VFX company. We ditched Maya for Blender. We are developing an extraordinary pipeline to integrate Blender with Houdini, Nuke, fTrack and the render farm. We just finished a movie and we’re starting another one soon, a much bigger one, and I hope I won’t hit Cycles’ limits because of this issue.

Actually, Cycles is not the problem; handling huge scenes is the problem. I’ve been in the industry for more than 20 years, working on Hollywood blockbusters with Maya/Houdini/Renderman pipelines. Managing huge scenes is what it’s all about. Katana is the way to go, but Cycles standalone is not there yet. The Foundry told me that they are working on supporting Cycles; I don’t know if that’s the saleswoman’s pitch or if it’s real, though. We couldn’t do Jungle Book or Coco in Blender, not because the software or the renderer are not good enough. I have full confidence in Blender. It’s all about managing the enormous files we have to deal with in the film industry.

We want to become the biggest Blender film VFX company. That’s our goal! :slight_smile:


That’s cool.

Having such a size, you could dedicate one or two developers to overcoming this problem, like Tangent Animation does. Stefan Werner does not work at the Blender Institute, he works for Tangent, and some of the new Cycles features come from him. Having another studio providing this kind of improvement would be great and could be very beneficial to you and to everyone, and I’m sure if the devs need guidance they will receive help from the main devs :slight_smile:


With pre/post-render callbacks you could make something work for a particular studio; it’s just not native support and wouldn’t integrate cleanly.


Thanks Brecht! If you want to laugh a little bit, check out my latest clip. :slight_smile:


Somebody on the forums sent me this. So I have a solution. I asked our developer to modify it so that it would detect the name of the object (based on the Proxify add-on) and swap it for the high-res version, so foo lo would be switched to foo hi. He decided to push this even further. We’ll have, in the properties, a way to turn on and off many display options (origin only, bounding box, point cloud, LOD or full geo). It will be absolutely awesome. Once it’s done, how can we submit this in case the institute would like to integrate it into Blender?

Also, I saw that when you have very heavy geometry on screen (and still have RAM and VRAM available), the entire interface becomes sluggish. I can understand the slowdown for the viewport, but what I don’t understand is why the rest of the interface is affected. Even doing things like “File > Save As” takes about 30 seconds just to get the file menu to pop up. Could it be because the entire interface is drawn with OpenGL and it needs to evaluate everything on screen no matter what? Is there a way to tell Blender to only refresh the viewport when needed?

Here’s the script:

import bpy
from bpy.types import Mesh, Scene

original_geometry: Mesh = None

def prepare_render(scene: Scene, context: object) -> None:
    global original_geometry

    # Append the hi-res geometry from another blend file
    with"C:\\Users\\user\\Documents\\boxy.blend", link=False, relative=False) as (data_from, data_to):
        data_to.meshes = [name for name in data_from.meshes if name == "Sphere"]

    # Get the object
    cube =["Cube"]
    # Store its current geometry so it can be restored after the render
    original_geometry =
    # Swap in the hi-res geometry =["Sphere"]

def restore_render(scene: Scene, context: object) -> None:
    global original_geometry

    # Get the object
    cube =["Cube"]
    # Keep a reference to the hi-res geometry so it can be deleted
    geometry_to_delete =
    # Swap the original geometry back = original_geometry
    # Delete the hi-res geometry from the current blend file, do_unlink=True, do_id_user=True, do_ui_user=True)

# Init the handlers


Hey, I’m the guy who created Proxify/Lodify.

Great idea, I may implement such an offloading option.


I was hoping for that option in Lodify ^^
Feel free to have a look at the Proxy Tools add-on. I didn’t get the offloading part to work as I wanted, though.

The problem with the “point cloud” display is that Blender currently doesn’t correctly display collection instances which consist only of vertices.

I think that for something like this to be implemented, it would have to be done in C/C++ and be integrated into Blender’s code.

Being Python, it could suffer from performance problems. Also, are you sure it’s doing what you want?

I mean, how can Blender feed the object to Cycles if the object isn’t loaded into the blend file and Cycles doesn’t have a “source HD” file where it can get the actual object?

Things like this can help with performance in the viewport and even with file size, but I’m not sure it would help at all at render time. I mean, even when you don’t see the actual object, before Cycles gets the scene that object needs to be loaded, so the amount of RAM eaten by Blender itself would be the same. In the end you are not really saving memory when you render the scene, just while you work.

Not saying it’s a bad thing, just that I’m not sure it’s what you asked for.

I may have misunderstood what you need :slight_smile:

Doesn’t linked data do the same? Here is just an example –
The scene was assembled with linked data. About 200-250 billion tris. And you can do almost anything with linked data; it works with everything… Probably I just didn’t understand the point of your idea.

P.S. The scene was assembled on an iMac 2017 with a 3.5GHz quad-core i5, a Radeon Pro 575 4GB and 64GB RAM. But the scene only took about 35-40GB to render.

The swap is done before rendering starts. So on screen you only see a box or a point cloud, but Cycles will get the whole thing. Imagine you are doing Lion King: you have Simba in the FG, but the BG is composed of thousands of plants and rocks, super heavy stuff, so heavy that you won’t be able to move your camera around. That’s the only solution. You can keep the lion hi-res, but if you don’t need to see the BG hi-res, there’s no point in showing it. This is how it works in the film industry; that’s how Katana works. And if you don’t use Katana, you can do it directly in Arnold or Renderman. It will not make the rendering faster; it’s just to free the user’s memory and make heavy scenes manageable. The script I provided earlier does exactly that: you have a cube on screen, but it will render a sphere. It’s a proof of concept.


This scene seems to be heavy, but there are not that many different objects in it. They are all instances, so even if you make a million copies of a tree, it’s still going to take the same amount of memory, both at render time and in the viewport. In my example, I have 106M polys, but all individual ones, no instances. Take your scene and convert all your instances into real copies instead, and I can guarantee you that you won’t be able to move around your scene, especially if you only have 5GB of VRAM. :slight_smile:

How can they be individual if you create a proxy and duplicate it? My point is that this script just does the same thing that linked data does. And of course you can turn on “Display as Bounds” for anything and go up to 1000 trillion polys, and probably more, at render. There is only one difference between this proxy script and linked files – the point cloud view for models.
Also, you can turn off viewport display (don’t confuse it with hiding) for parts that you have already put in place. They will appear in the render, but they will not occupy VRAM.
Or am I missing something?
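For reference, that “turn off viewport display (don’t confuse with hiding)” distinction, scripted. The object name is a placeholder and the `bpy` import is guarded so the snippet parses outside Blender:

```python
# Sketch only: bpy exists only inside Blender.
try:
    import bpy
except ImportError:
    bpy = None

def disable_in_viewport(obj_name="Rock_hi"):
    """Disable an object in viewports (monitor icon), not just hide it.

    hide_set(True) corresponds to plain hiding (the eye icon);
    hide_viewport = True is the stronger 'Disable in Viewports' toggle.
    The object name is a placeholder.
    """
    obj =[obj_name]
    obj.hide_viewport = True  # 'Disable in Viewports' (monitor icon)
    # obj.hide_set(True)      # would merely hide it (eye icon)
```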

Nope. Even if objects (or collections) are linked, they will still be loaded in memory. The file size will be smaller, but all the geo will still load. And even if you turn off visibility in the viewport, it will free the VRAM but not the RAM.