Offloading heavy geometry

Often they are on external network drives to boot. :stuck_out_tongue:

Wasn’t that clear from the opening post? edit: never mind, it was post number 8. I misrecalled.

There’s no reason at all to force you to do anything; that’s not the point here. If you want to keep everything in your scene, then just do it, but the day you run into something bigger than your computer can handle, you’ll be happy to have the option. :slight_smile:

6 Likes

I think there’s a misunderstanding here between the offloading of the geometry itself and its display method.

In the example above, a single archviz artist is working on some heavy scene:
the artist’s OpenGL viewport starts to become extremely laggy, so they want to display some objects as point clouds/proxies.

As far as I understand, what you/some are suggesting:
- if the user wants to use such a display method, they will be forced to offload every object into external files.

What I’m suggesting:
- let the user decide whether to offload or keep everything internally stored (loaded into VRAM only on final render)

1 Like

.blend files, IIRC, are basically glorified memory dumps. Or at least they used to be.

I think they are more like a database for anything Blender wants to store. Even Blender’s settings are stored in a blend file somewhere in the user’s appdata. There is a difference between a memory dump and a database: a memory dump is loaded as-is and can’t be read partially, while a database can be read piece by piece.
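To make the "database" point concrete: the on-disk .blend layout is in fact a sequence of self-describing blocks (a 12-byte file header, then per-block headers naming a 4-character code, payload size, old memory address, SDNA struct index and count), so in principle a reader can index the block table and seek to one datablock without loading the rest. Here is a minimal hand-rolled Python sketch; the layout follows the publicly documented format, but `iter_blocks` is a hypothetical helper, not any Blender API:

```python
import struct

def iter_blocks(buf: bytes):
    """Walk the top-level blocks of an uncompressed .blend buffer.

    Yields (code, payload_size, payload_offset) for each block without
    touching the payloads themselves -- i.e. the file can be indexed
    and then read piece by piece, database-style.
    """
    assert buf[:7] == b"BLENDER", "not a .blend header"
    ptr = "Q" if buf[7:8] == b"-" else "I"      # '-' = 8-byte pointers, '_' = 4-byte
    endian = "<" if buf[8:9] == b"v" else ">"   # 'v' = little-endian, 'V' = big-endian
    fmt = endian + "4sI" + ptr + "ii"           # code, length, old address, SDNA index, count
    head = struct.calcsize(fmt)
    off = 12                                    # skip the 12-byte file header
    while off + head <= len(buf):
        code, size, _old, _sdna, _count = struct.unpack_from(fmt, buf, off)
        yield code, size, off + head
        if code == b"ENDB":                     # end-of-file marker block
            return
        off += head + size
```

With an index like this, a loader could in principle seek straight to a single datablock, which is exactly what partial reading would build on.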

One thing I haven’t seen discussed here:
I am reasonably sure certain formats like vr*yProxy save a pre-built BVH tree inside the file. This allows gigabytes to be loaded directly into the renderer with only seconds of pre-processing.
To my knowledge the Cycles viewport has a dynamic BVH (so the tech is there); it’s only the main renderer that throws everything into the same bag, which has plagued us for years with the scene build time sometimes being on par with the render time… (with only small speed gains from having a unified BVH)

Anyway - if Blender ever gets its own solution for offloading, I really hope they implement this.
One side effect is that you can use different depths of the BVH tree to set the viewport level of detail when showing only bounding boxes.
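To illustrate that side effect (a hypothetical sketch, not Blender/Cycles code): with any AABB tree, cutting the tree at depth d yields a set of boxes that gets finer as d grows, which is exactly a bounding-box LOD dial for the viewport. The 2D median-split build below stands in for whatever BVH the renderer would store with the asset:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax) -- 2D for brevity

@dataclass
class Node:
    box: Box
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def merge(a: Box, b: Box) -> Box:
    """Smallest box enclosing both a and b."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def build(boxes: List[Box]) -> Node:
    """Median-split BVH over leaf boxes, sorted by x centroid."""
    if len(boxes) == 1:
        return Node(boxes[0])
    boxes = sorted(boxes, key=lambda b: b[0] + b[2])
    mid = len(boxes) // 2
    left, right = build(boxes[:mid]), build(boxes[mid:])
    return Node(merge(left.box, right.box), left, right)

def lod_boxes(node: Node, depth: int) -> List[Box]:
    """Cut the tree at `depth`: depth 0 is one big box, deeper cuts are finer LODs."""
    if depth == 0 or node.left is None or node.right is None:
        return [node.box]
    return lod_boxes(node.left, depth - 1) + lod_boxes(node.right, depth - 1)
```

A viewport could draw `lod_boxes(root, 0)` while navigating and a deeper cut when idle; since the cut reuses the render-time tree, the LOD would come for free if the BVH is stored with the asset.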

Obviously this only helps for static content, but I assume that is the majority of the heavy stuff.

edit: I guess the reason we don’t yet have reusable BVH data between frames is that it’s not so easy to figure out what has changed and what hasn’t (depsgraph). For a static asset file format, at least, that would be simple to figure out…

1 Like

Yeah, but I don’t think Blender loads data from a blend file partially when you open it. What you say applies to linking or appending a datablock and its hierarchical children.

I don’t see at all how this contradicts what Funnybob wants.
The gist is: if your computer can handle it, have it all in there at once. If it can’t, you get options, which you are at no point forced to actually use, to offload stuff for more efficient viewport display.
Like, the only way you’d be “forced” here is by your computer not being able to handle the scene otherwise. No dev’s gonna force that upon you.

No dev’s gonna force that upon you.

Yes: forcing the user to create extra external files in order to use a handy display solution.
That’s forcing a data-storage workflow upon users who want to use the newly proposed display solutions (point cloud / proxy).

as far as I understand, what you/some are suggesting:

  • if the user wants to use such a display method, they will be forced to offload every object into external files.

what I’m suggesting:

  • let the user decide whether to offload OR keep everything internally stored (loaded into VRAM only on final render)

Packing everything in the .blend is clean and extremely handy for a lot of users.
Again, I’m not suggesting that offloading is bad; I’m saying that the user should have the choice of whether or not to offload when using a viewport display method.

Imagine forcing users to create external files for each object they’d like to display as a bounding box in their scene. It doesn’t make any sense to tie a data-storage method to an OpenGL display solution, right? That’s exactly what I’ve been trying to explain for the last 5 posts or more: there should be a distinction between the display method of the object and the offloading itself. Forcing a display method to work only with a data-storage solution isn’t a flexible idea. That doesn’t mean we shouldn’t have such an option; in fact we do, and it’s an excellent feature! It simply means that offloading needs to be an optional choice.

Is it not clear enough? :sweat_smile:

A specific option to have a pre-built BVH could be a very good idea, especially because the biggest pre-processing slowdown now comes from instances. That will surely be sped up with the new Point Cloud implementation that @brecht is working on, but it will probably never be as fast as not having to build the BVH at all.

@BD3D, to be honest about using external files: we are a small shop (you know it :wink: ) and we miss that workflow a lot. It is widely used in archviz on other platforms, not just in VFX, and not just by companies but also by freelancers, because it helps to keep projects organised and reusable, keeps the main files small, and keeps render pre-processing fast. So I think this improvement is something that may benefit everyone, from the smallest solo artist to the biggest studio out there doing hardcore VFX.

I agree there is no need to enforce the creation of an external file; for now a user can use a collection instance or a linked collection instance. But when you are in the phase of optimising a super heavy scene, having external files with all the possible accelerations included is very helpful. And you know, for a video every second equals spent money; it’s not so important for static renders, but for animation, this is life! :slight_smile:

3 Likes

I don’t know if prebuilding the BVH is practical anymore. Nowadays, with Embree, OptiX, and AMD RDNA2, those APIs provide no support for it, and the BVH structure depends on the specific CPU or GPU architecture.

I don’t know if other renderers that adopted Embree still provide support for it, but I would not expect it. Maybe if they have customized Embree. For GPU hardware raytracing it’s basically out of our hands entirely. BVH building is also significantly faster than it used to be, definitely on the GPU.

3 Likes

Sounds to me like a cool thing would be the ability to freely split apart every kind of data stored in a blend file automatically.
I’m thinking of an implementation similar (but no doubt more complex) to how simulation caches work: you could click a button for each kind of data, for each individual piece of data, or for all the data in an entire collection or scene, and it would generate a folder structure, converting the current blend file into a file that only links things, with all the data stored separately in individual files.
This is gonna be more interesting once the asset manager is in place, I think.

That can be done with an addon; it’s not hard as long as you have a clear naming convention in your pipeline and you know exactly what you want to do.

The piece we’re missing is the ability to keep things outside of Blender, to use as little memory as possible while working in the scene and to minimise pre-processing and memory usage at render time.

If the pre-built BVH does not make sense anymore, then it’s out of the question; the thing then would be to understand why “proxies” in other engines accelerate scene pre-processing so much.

1 Like

Is it not clear enough? :sweat_smile:

I think you are still mixing up two things. One is how things can look inside the viewport; the other is how to deal with massive (amounts of) objects in a scene without blowing it up.

Like @Funnybob tried to explain, using external files for large scenes is a basic workflow in all the other 3D applications out there, and not just for movie production.
I’ve done my fair share of TV commercials which used this exact workflow for all the files needed in the scene. (Remember that multi-application crossovers are very much a thing in those fields.)
Or used ‘pre-render’ files like Arnold’s .ass format, V-Ray’s proxies, etc.:
light on the viewport, with all data external from the scene file for rendering.

Also, those TV post-production teams were often not that big. It’s just a very convenient way of working, especially when there are a lot of iterations on (external) models, water/destruction simulations, smoke/fire, etc.
You want to link that in, not import/append it into a blend file; otherwise swapping in new stuff becomes a pain. With links it’s just a matter of relocating the new iteration.

The Blender workflow already facilitates this more or less with linked files at multiple data levels, so people understand the concept. Why are external Alembics, VDBs, etc. all of a sudden an issue? :wink:

And yes, there is still a choice, if you want. In other applications you still have the option to just import all the data, and use one big file. Nothing is being forced on you.

I love your Scatter addon, and how you dealt with certain things, but it’s all still in-scene, making file sizes large. Blender just needs some small extras to deal with tons of geo on screen.

You pointed out that Blender is mostly used by freelancers, and that is true for a large part of the user group, creating mostly stills for a variety of industries.
But when you start using Blender in a somewhat larger capacity, the things @Funnybob is talking about become an issue.

My two cents here :wink:

6 Likes

Just wanted to point out that that’s not the case, at least not anymore: there are several medium/large studios using Blender in production. I’ll mention just one, for different reasons. This studio is producing a TV series with Blender, and one of the first problems they asked me about was the way Alembic files are handled, because their files were getting big for no reason at all and the scene experienced a big slowdown - https://www.b-waterstudios.com/

Without any numbers or statistics we are talking to the wind here. Anyway, that wasn’t the point of the argument. The point was that a handy new display feature should be flexible enough that it isn’t exclusive to any particular data-storage workflow, because data storage != viewport display.

@RobWu, you seem to completely miss my point… a) in fact I’m not the one mixing up the two concepts, quite the opposite; b) offloading is great and it needs to be implemented, so I’m not sure why you feel the need to defend the concept, as there were no arguments against it.

Totally agree with that. One thing is the visibility of the objects, and a different thing is the possible optimisation of both render and viewport using external files; those are two separate points.

We should be able to see everything as a point cloud, no matter whether it is a cache or an actual object, and we should also be able to see an Alembic file or another external file as Textured.

Blender is like a car, full of options. Some buy the car because it “can” go at, say, 300 km/h, though that doesn’t mean everyone will drive at that speed; some are interested in its infotainment system, others in how much it consumes, and some just like how it looks.

Some users requested that the car company add “external” cargo capacity, because the trunk is not enough; other users don’t like the idea of being “forced” to have that extra “ugliness” on top of their car roof. The company decided to add it to the list of “options”, and everyone was happy.

I guess this sums up the whole discussion here. (save for the company’s decision).

1 Like

BEHOLD THE MIGHTY PROXY TOOL 2.0!

The author gave me permission to show you his beta version. You can keep the geo in your file or offload it, so everybody’s happy. There are a few issues, like I mentioned before, but for me it’s a game changer even at this point. Try it with super heavy geometry, like trees. Really cool!

He does need some help, and it would be awesome if one of you checked it out:

You can get the addon there too.

10 Likes

Is there any new development on this one?
Just asking, as the BA thread is quite dead atm…

rob