2024-08-16 Variants Design Workshop

Attendees/involved:

  • Dalai Felinto
  • Francesco Siddi
  • Pablo Vazquez
  • Bastien Montagne (Technical feedback)
  • Sergey Sharybin (Technical feedback)
  • Jesse Yurkovich (USD Q&A)

This proto-design document is the outcome of design sessions that happened in August 2024 at the Blender HQ. It is a follow-up of the Online Assets design discussions.

The goal is to look at variants as a core feature of Blender, and not something exclusive to assets. It also tries to align with the USD concept of variants. For a recap of variants and representations in the context of assets, see the online assets design.

Audience:

  • Authoring: Technical users
  • Using: All users

Use-cases:

  • A cabinet has two variants: open and closed doors.
  • A desk has three variants: wood, white, and black.
  • A plant vase has two variants: broken and full.
  • Different versions of a character are created with minor differences among them, and shared as a single character (with variants to pick from).

Definition:
A variant is a set of dynamic overrides combined to offer alternative views of a data-block. Each data-block can have multiple variants.

[Collection data-block]
    [ variant open red door ]
       [ Material/Paint/Principled BSDF/Color = (1.0, 0.0, 0.0) ]
       [ Object/Door/CustomProperties/Open = True ]
    [ variant closed blue door ]
       [ Material/Paint/Principled BSDF/Color = (0.0, 0.0, 1.0) ]
    [ variant open green door ]
       [ Object/Door/CustomProperties/Open = True ]

Although the dependent data-blocks (e.g., the Material) can have their own variants, they don’t have to: their values can be overridden just for the evaluation context of this particular data-block.
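
To make the data model more concrete, below is a minimal Python sketch of a variant modeled as a named set of dynamic overrides, each addressing a property by a data path. None of this is existing Blender API; the class names and paths are purely illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class Override:
        # A data path relative to the owning data-block, plus the value to apply.
        data_path: str
        value: object

    @dataclass
    class Variant:
        name: str
        overrides: list[Override] = field(default_factory=list)

    # The example above, expressed as named sets of overrides.
    door_collection_variants = [
        Variant("open red door", [
            Override('objects["Door"]["Open"]', True),
            Override('materials["Paint"].node_tree.nodes["Principled BSDF"]'
                     '.inputs["Base Color"].default_value', (1.0, 0.0, 0.0, 1.0)),
        ]),
        Variant("closed blue door", [
            Override('materials["Paint"].node_tree.nodes["Principled BSDF"]'
                     '.inputs["Base Color"].default_value', (0.0, 0.0, 1.0, 1.0)),
        ]),
    ]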

Library Overrides, Dynamic Overrides, Variants

  • Library Overrides are for animating a linked character. Multiple instances of the character can be animated in the same scene.
  • Dynamic Overrides are for changes in a local file (which can be carried over to future links).
  • Variants use dynamic overrides to create different views of a data-block.

Authoring variants

Note: This is an exploratory design, trying out a workflow as distant from library overrides as possible.

Technical workflow = Outliner-centric:

  • Click on a data-block and choose “Add Variant”.
  • Go to a new view in the Outliner to edit the variants (very similar to how override properties are shown at the moment).
  • Go to a property in the Properties editor and use “Copy Full Data Path”.
  • Paste the path into the Outliner variant editor (see the sketch after this list).
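
For illustration, “Copy Full Data Path” currently produces a string such as bpy.data.objects["Door"]["Open"]. A hypothetical variant editor could split such a path into the ID it belongs to and the path relative to that ID, roughly along these lines (a sketch only; the helper is not existing Blender API):

    def split_full_data_path(full_path: str) -> tuple[str, str]:
        """Split e.g. 'bpy.data.objects["Door"]["Open"]' into the ID lookup
        ('objects["Door"]') and the data path relative to that ID ('["Open"]')."""
        prefix = "bpy.data."
        if not full_path.startswith(prefix):
            raise ValueError("Expected a full data path starting with bpy.data")
        rest = full_path[len(prefix):]
        id_lookup, _, relative = rest.partition("]")
        return id_lookup + "]", relative.lstrip(".")

    # split_full_data_path('bpy.data.materials["Paint"].node_tree.nodes["Principled BSDF"].inputs[0].default_value')
    # -> ('materials["Paint"]', 'node_tree.nodes["Principled BSDF"].inputs[0].default_value')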

Using:

  • Outliner lists all the variants nested under the data-block.
  • Outliner indicates that the data-block has variants.
  • When selecting an ID, the variants for that ID can be selected as well.
    • For simple cases (where there is a single VariantSet), each individual variant could be available in the ID Templates.
    • Variants are selected after an asset is selected.
    • If the individual variants are not exposed directly, a “default” variant could be set as the one picked by the ID Template (to avoid having to pick an original high poly mesh before switching it to a proxy).

Which data-blocks to support

Although the implementation is going to be generic, not all data-block types would benefit from variants.

Examples of data-blocks that could use variants:

  • Collection, object, mesh, material, image.

Examples where variants don’t make sense:

  • Light probe, lattice.

USD

USD has a concept of variants which should be interchangeable with Blender’s variants implementation.

  • Each prim (e.g., an Object data-block) can have VariantSets.
  • Each VariantSet has one or more Variants.
  • Each Variant contains a name and a series of opinions (overrides) over specific settings.

Any user of the prim must choose one of its variants to use, for each of its VariantSets.
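
For reference, this is roughly what authoring such a VariantSet looks like through the USD Python API (a minimal sketch; the prim, VariantSet name, and attribute are just examples):

    from pxr import Usd, UsdGeom

    stage = Usd.Stage.CreateInMemory()
    desk = UsdGeom.Cube.Define(stage, "/Desk")
    prim = desk.GetPrim()

    # One VariantSet named "color" with two variants.
    vset = prim.GetVariantSets().AddVariantSet("color")
    for name in ("red", "blue"):
        vset.AddVariant(name)

    # Opinions authored inside the edit context only apply while "red" is selected.
    vset.SetVariantSelection("red")
    with vset.GetVariantEditContext():
        desk.GetDisplayColorAttr().Set([(1.0, 0.0, 0.0)])

    # A user of the prim picks one variant per VariantSet.
    vset.SetVariantSelection("blue")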

Representation

From the USD point of view, representations are variants. Since 2022 there has been an effort from Ubisoft to propose representations (LODs) as part of the USD schema, but this is still under discussion (as of July 2024).

From the Blender point of view, we can implement representations as a user-facing feature with a very constrained experience, and handle them under the hood the same way we would handle variants.

This could then be used by the Simplify panel to restrict the maximum image resolution, or the LOD.

Image Representations

For example, we can implement representations for the Image data-block, where users can pick a different file path for each resolution (1K, 2K, 4K, custom).
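
A purely illustrative sketch of what such a per-resolution mapping could look like for a single Image data-block (hypothetical data and helper, not existing Blender API):

    # Hypothetical mapping from representation name to file path.
    image_representations = {
        "1k": "//textures/desk_wood_1k.png",
        "2k": "//textures/desk_wood_2k.png",
        "4k": "//textures/desk_wood_4k.png",
    }

    def filepath_for(requested: str) -> str:
        """Return the file path for the requested resolution, falling back to the
        highest available one (e.g. when driven by the Simplify panel)."""
        return image_representations.get(requested, image_representations["4k"])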

Levels of Detail (LOD)

Note: There are different ways to design LOD integration. This is simply an example of treating representations as a Blender-specific feature, instead of a generic solution for all data-blocks.

  • Objects could get an LOD panel where a different mesh and a different modifier stack could be used for each LOD.
  • Each LOD would be selected based on camera distance (thresholds defined at the scene level, overridden by object settings); see the sketch after this list.
  • This could then be integrated with Cycles/EEVEE/Workbench.
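
A rough sketch of the distance-based selection described above (the thresholds and helper are hypothetical, not a proposed API):

    def pick_lod(camera_distance: float, thresholds: list[float]) -> int:
        """Return the LOD index for a given camera distance.

        thresholds is sorted ascending, e.g. [10.0, 50.0] means LOD 0 below
        10 units, LOD 1 below 50, and LOD 2 beyond that. Scene-level thresholds
        could be overridden per object."""
        for index, limit in enumerate(thresholds):
            if camera_distance < limit:
                return index
        return len(thresholds)

    # Example: pick_lod(35.0, [10.0, 50.0]) -> 1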

USD Q&A

(answers by Jesse Yurkovich)

  • Can you have different variant sets enabled for a single prim?

For example, if I have the prim Desk and the following variant sets:

  • Color: Red, Blue
  • Size: Small, Large

Can I have a table which is Red and Small?

– Yes

  • Is the original prim also considered a variant?

– No, but you can always create a “default” variant on each VariantSet that has no overrides (this works for Blender, but for USD this would be saved a bit differently).

  • Can the variant be animated? (as in, changing which variant to load in a different frame?)

– Likely not. There’s a portion of the Ubisoft LOD spec that alludes to this being something they want their LOD proposal to address.

Technical feedback

  • USD feature parity vs usability improvements to artists.

    • If we get to a point where we need to make a decision between ease of use and feature parity, it is okay to focus on the usability of the Blender workflows.

    • For example, we could support a single VariantSet (and no nesting), making it 100% compatible for exporting, and requiring some conversion for importing (e.g., by flattening the possible combinations as individual Variants).

  • Once a collection is linked, where are the active variants stored?

    • Could be stored in the LayerCollection.
    • If it is an instance, they could be stored in the Empty object.
  • If every ID selector can pick the variant, how do we handle cases where you just want the variant used in the current context (e.g., Boolean modifier or light linking)?

  • An IDUser (like ImageUser) may be required in any place that stores an ID pointer.

  • Partial library loading is required (so a high poly variant is not brought into memory/depsgraph if not used).

    • This is also important for asset representation, where only a few representations may be available (downloaded).

Next Steps

Polish the design (mockups, …) and see how this would fit as a project: the MVP(s), and the real studio use cases to consider as deliverables.

26 Likes

In 2021 I proposed a similar idea. Some of the points could be design references, maybe.
Variation System - My proposal based on current asset browser - Archive - Developer Forum (blender.org)

I have a question, probably a little borderline with respect to the concept of variants. Could these variations be changed at the ViewLayer level? Examples: rendering the same table with different material variations, where each view layer has its own dynamic override selected; or a Scene data-block with variations (one view layer with motion blur and another without).

Thank you for your work and your attention.

1 Like

In 2D motion graphics, we use something like variants all the time. Perhaps this can be an inspiration?

As an animator, I often design 2D assets and prepare them for variants. That means I mark the bits that can change and prepare options. The end user of the asset (can be me as well) can pick options using a UI. In 2D animation, variants consist of colors, shapes, animations, visible text and more. Controls for the variant option pickers can be a color picker, a Boolean or a number. Scripting often responds to these options to show/hide parts of the asset or do other things, like change a font size. Novice users can drag and drop a packaged asset and change options.

Feel free to contact me if you want to see a more detailed practical example of how this works in real 2D productions to save time and avoid duplicates.

1 Like

Unity’s Prefab system may have some good inspiration for Variants - I’ve always thought of them as classes in the form of 2D/3D assets, but there are obvious differences so it is not a 1:1 analogy. But they are immensely useful in real productions.

1 Like

Will this project allow for something like texture mipmapping?
We’re really struggling with this at work; not being able to choose texture resolution based on camera distance (with both linked and local assets) forces us to do a lot of workarounds, or to separate everything into more render layers than necessary just to be able to fit the whole scene into VRAM, increasing render times significantly.

14 Likes

I wish an LOD system were considered, and that it were possible to switch which Blender file to link from between editing and rendering.

We struggled working with a large scene that had several industrial models linked from external files, most of which were nearing 1GB in size. These were also loaded from a network drive.

Even if we had optimized the models for the viewport, opening a scene took a very long and unproductive time since the blend files all needed to load in full. It would have made a world of difference if we could have designated different files for viewport (work) and render (farm).

9 Likes

Having both variants and representations feels like the right direction to head in. You might run into a bit of friction on the USD side of things for attribute composition in variants vs representations, but you could probably wag the dog a bit if you’re able to send someone to the wg-usd-games meetings on a regular basis. The LOD Schema proposal is probably the best place to set the foundations for a representation axis, and it’s at the top of the agenda for the meeting tomorrow.

Overall, I’d say my largest concern is actually on the UI end — adding more features to the existing datablock widgets could go wrong in half a dozen ways, and I think you’re in territory where it’s worth considering some new UI patterns. You’ll end up dealing with sub-graph encapsulation and elision in multiple display contexts, and the existing tree-view widgets don’t feel like they’ll be up to the task.

3 Likes

Is there a way to synchronize keyframes for the modifier in Blender? It seems that the associated animation data and object keyframes cannot coexist, and using a driver for synchronization feels like overkill.

While that is no doubt something which is very much needed (especially by studios working on larger assets/productions), on the surface it sounds like a bad way of doing something that needs to be built into Cycles (as most other render engines already have): mipmapping.

Combine that with an LoD system for the mesh at different ‘resolutions’ and modifier stacks (or selected active/evaluated modifiers at user-defined LoD levels) and you would then have a truly fully featured LoD system within Blender.

Something that is currently and seriously lacking.

14 Likes

In that same vein, especially for EEVEE those mipmaps make sense.

Having the ability to pick a different file path for different resolutions sounds intriguing if it results in one image with an included mipmap.
Another quality-of-life feature would be being able to import/export a mipmap atlas.

For ease of use the user usually imports one image and then the mipmap is generated containing the smaller sizes.

Two sampling methods which would be great in Blender: nearest mipmap anisotropic, as well as linear mipmap.

[images: mipmap anisotropic / linear mipmap filtering comparison]

9 Likes

1000% agree! There was a time when we considered migrating to Blender at our studio, so we tried to replicate a set from Maya in Blender (a small one, really), and just the fact that Blender crashed every single time when loading the materials with 4K textures was a showstopper. And it was a set that Maya/Arnold handled with ease, rendered in a minute.

The change in workflow from Maya in just that aspect would have meant an unjustifiable amount of work, so much so that we just couldn’t afford it. It wasn’t the only reason, but it was one of them.

12 Likes

My understanding is that this project is only about figuring out which kind of data variants are needed.

For mipmaps, it is more common to have one texture with multiple resolutions, at least in game engines. As far as I can see, this kind of use case was not explicitly considered.

This conversation about mipmaps seems a little tacked on; do we really want performance features to rely on “asset variants”? I thought variants were meant to be mostly about the art. LODs have little relevance in the context of offline rendering (screen-space subdivision seems like the closest relative, and it is automated), and mipmaps should be automated as well, hidden from the user.

6 Likes

LODs and Image Representations were both mentioned, and neither is primarily about the art.

To me, it also feels like a somewhat unfortunate mix, but it appears to make sense on a technical level to consider both.

For any largish outdoor scene (that one wants to render, be it in Cycles or EEVEE) I would have thought that LoDs are totally relevant. Think grass, trees, buildings, a large crowd of characters, etc.: the further away from the camera they are, the simpler they can get, both in terms of mesh detail and image textures.

Since the design document lists both of these design requirements, we logically end up with mesh ‘variants’ being different mesh data (a broken or full plant vase is not going to be the same mesh or mesh detail), and hence there is no reason why they couldn’t also be the same broken vase at different LoD levels.

And if you are going to change the image on the same desk mesh from a wood to a black veneer at different resolutions, then why wouldn’t you also want to keep the same texture but just change its resolution as you change LoDs? At which point one may as well do mipmapping.

Since at the end of the day, if Blender only ‘half’ does it as a ‘variants system’, then a whole bunch of people are going to try and use it as an LoD and mipmapping system and wonder why it’s only a ‘half-baked’ implementation.

5 Likes

With publicly available knowledge, the following offline renderers make use of mipmaps:
Vray
Arnold
Renderman
Corona

EDIT: upon longer research, the availability and exposure of mipmap features to users varies wildly between offline renderers.
For sure it goes against copyright guidelines to write down their UI decisions/implementations in detail in this forum.

It might be best to think about how to amalgamate a big chunk of the useful possibilities of those variants while keeping disk and network I/O, time, and memory efficiency and usage concerns in mind.

Thinking about it a bit more, I believe that performance and rendering need to be a primary consideration.

Let’s take one of the use cases: plant vase variation, broken and full.

Both Broken and Full need to be different mesh data and it stands to reason that image textures would also be different.

So let’s say you have a 60-frame animation where the first 30 frames have the Full Vase and the last 30 frames have the Broken Vase.

Logically, there are two ways to render that: load all the mesh data and image textures into VRAM and only use each set of data for the 30 frames in which the camera sees it.

Or, for the first 30 frames, only load the Full Vase mesh/image into VRAM and then for the second 30 frames, unload the Full Vase data and load in the Broken Vase data.

It may not seem like much for just a single object, but what if your scene has 10 objects, each with 3 ‘variants’ of mesh data and 4K image textures? All of a sudden, if ALL the data gets loaded into VRAM for every frame (even if it’s not needed for that frame), then there is a good chance you run out of VRAM and can’t render at all.

At which point, the whole system largely becomes unusable and mostly ends up as a useless feature for anything but small/limited scenes.

1 Like

As Hadriscus pointed out, LODs are not that useful for offline rendering. Especially in huge environments.
Usually, you will scatter and instance your assets over your whole environment, so you only deal with a limited number of unique assets versus having thousands of individual assets. Let’s take a tree for example: this tree will have to be loaded into RAM only once, but can then be used in 1 million instances from foreground to background in the highest quality. If you use LODs (highest quality in the foreground, lowest in the background) you have to load all LODs into memory. So your memory requirements will be substantially higher, the quality slightly lower, and it’s more work to set up.

That’s not to say LODs are without use for offline rendering, it can definitely be very useful! Let’s say you have your tree ONLY in the background and you’ll never need the high-quality version – then opting for a lower-quality LOD for the tree is the sensible thing to do.

Combining various LODs of the same assets into a scene makes sense for game engines, but it doesn’t make any sense for offline rendering. Having the ability to load in variants however is very very useful for a lot of things.

2 Likes

Sure, but as I pointed out in my last post, only the exact variant needs to be loaded for any given frame; otherwise you hit all of the same increased memory requirements that you just pointed out with LoDs, for both the mesh data and any different image textures.

Hence, this whole design does need to very much consider memory and rendering performance. Which in the case of images, somewhat brings us back to mipmaps.

3 Likes