The fundamental problem is that no existing import method works well for asset libraries. Each method may work well for a few use-cases, but not for all the kinds of asset libraries we care about. Most pressing for us, this includes the essentials asset library that we integrate into Blender.
Existing Import Methods
Linking generally works well for the use-cases that it has been designed for, but there are problems when using it with third-party asset libraries or assets shipped with Blender.
- It makes .blend files dependent on the asset libraries being available and requires those libraries to stay at specific file paths. This is fine for a bigger production where these things can be tightly controlled, but is bad for the many users who want to create standalone files using assets. It shouldn’t be necessary for them to explicitly pack assets. They might not even be aware that they are using linked assets if those are deeply integrated with Blender or add-ons.
- We don’t want people to link to assets that ship with Blender, because that makes it impossible for us to replace them in future versions without breaking people’s files.
Appending works well for adding a data-block that is supposed to be modified locally. For assets this is problematic though, because appending creates a copy of the data, and in many cases those copies are never modified locally. The issue with these copies becomes apparent when appending the same asset more than once, or when linking from different files that contain the same appended assets. In those cases we end up with multiple identical data-blocks. This is annoying and confusing, because the expectation is that the asset data-block exists only once.
Appending with reusing data was added to improve the standard append mechanism a little bit for simple use-cases. It generally avoids appending an asset a second time if it has been appended before. This usually works, but there are still problems:
- The previously mentioned case of duplicate data-blocks after linking from multiple files which have the same appended assets still exists.
- There is no mechanism that checks if the appended asset has been modified locally. This makes it difficult to know if a newly imported asset will be the original asset or the locally modified one.
Previous Work
While it’s generally agreed upon that these methods don’t work well for assets in all the cases that are important to us, there is no consensus on the solution yet.
The result after the initial discussion can be found in this task.
A previous proposal was discussed, but the additional overhead required even for simple local assets was too much (among other issues).
Another discussed potential solution is to not change the existing functionality much, but to just hide it more from the users. For example, Blender could try to detect duplicates and just not show them even though they are still there. This might make things less annoying for users but also has downsides. Users become more detached from the data-blocks in their scenes which could be a real surprise when they find out that there are actually many more data-blocks under the hood. Furthermore, there can also be a lot of unnecessary overhead just by detecting and processing duplicate data. Hiding some data-blocks from the user can make sense in some places, but is not a solution to the fundamental problem in my opinion.
Proposal
At a high level the idea is to have an automatically generated hash for each data-block which changes whenever the data-block or any of its dependencies changes. When importing an asset, it is embedded together with its dependencies. Contrary to what appending would do, the imported data-blocks remain in an uneditable state like linked data. In fact, internally they are still linked, but using a new kind of virtual library that uses the data-block hash instead of a file path as identifier.
The following sections explain the proposal in more detail.
Data-Block Hash
The main requirement for these hashes is that two data-blocks with the same hash can be used interchangeably by Blender. It’s not guaranteed that all data-blocks that could be used interchangeably will have the same hash though.
I’m using the term “interchangeable” instead of “identical” intentionally. Some data-blocks can be interchangeable even if they are not identical. For example, two node trees that are identical but where the nodes have different positions are interchangeable but not identical.
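To make the distinction concrete, a content-based shallow hash could simply skip UI-only fields such as node positions. The following is a minimal Python sketch with hypothetical data layout and field names (Blender’s real implementation would live in core code and operate on actual node trees):

```python
import hashlib
import json

# Fields that do not affect evaluation and are therefore excluded from
# the hash (hypothetical list; node positions are purely cosmetic).
UI_ONLY_FIELDS = {"location", "color", "label"}

def shallow_hash(node_tree: dict) -> str:
    """Hash a node tree while ignoring UI-only fields, so two trees that
    differ only in node positions are considered interchangeable."""
    def strip(node):
        return {k: v for k, v in node.items() if k not in UI_ONLY_FIELDS}
    canonical = json.dumps([strip(n) for n in node_tree["nodes"]], sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Identical except for node positions: interchangeable, so equal hashes.
tree_a = {"nodes": [{"type": "Math", "operation": "ADD", "location": [0, 0]}]}
tree_b = {"nodes": [{"type": "Math", "operation": "ADD", "location": [5, 9]}]}
assert shallow_hash(tree_a) == shallow_hash(tree_b)
```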
One has to differentiate between two kinds of data-block hashes:
- Shallow Hash: Hash of the data-block itself only, ignoring any other referenced data.
- Deep Hash: Hash of the data-block itself and all other referenced data.
In the end, we’ll always need the deep hash to actually find interchangeable data. Unfortunately, it’s not generally possible to store the deep hash in the asset file, because the asset may link data-blocks from other files. Changing those linked files has to automatically change the deep hash of the dependent assets, otherwise the main requirement cannot be met reliably.
The shallow hash can be computed and stored in the asset files though. It seems like a good approach to always write the shallow hashes in the asset files, but to only compute the deep hash whenever importing an asset. It can be based on the individual shallow hashes.
While we could implement actual content-based data-block hashing, the main requirement is also met if we just generate a new random shallow hash whenever a data-block is changed. So I’d start with that for now.
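Putting the pieces together, the deep hash can be derived from a data-block’s own shallow hash combined with the deep hashes of its dependencies. This is a Python sketch with hypothetical names (and it ignores dependency cycles for brevity):

```python
import hashlib
import uuid

class DataBlock:
    """Minimal stand-in for a Blender data-block (hypothetical)."""
    def __init__(self, name, dependencies=()):
        self.name = name
        self.dependencies = list(dependencies)
        # As proposed above: start with a random shallow hash that is
        # regenerated on every change, instead of hashing the contents.
        self.shallow_hash = uuid.uuid4().hex

    def mark_changed(self):
        # Any modification invalidates the shallow hash.
        self.shallow_hash = uuid.uuid4().hex

def deep_hash(block: DataBlock) -> str:
    """Combine the block's shallow hash with its dependencies' deep hashes."""
    h = hashlib.sha256()
    h.update(block.shallow_hash.encode())
    # Sort so the result is independent of dependency order.
    for dep_hash in sorted(deep_hash(dep) for dep in block.dependencies):
        h.update(dep_hash.encode())
    return h.hexdigest()

# Changing a dependency automatically changes the deep hash of the
# dependent asset, which is exactly the main requirement.
tex = DataBlock("texture")
mat = DataBlock("material", [tex])
before = deep_hash(mat)
tex.mark_changed()
assert deep_hash(mat) != before
```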
Embedding and Virtual Library
Imported assets will be embedded while also having ID.lib set to a new kind of Library: a virtual library. It is virtual in the sense that it does not reference any file path. Instead, its identifier is just the deep hash of the asset. Since the deep hash is different for an asset and its dependencies, a separate virtual library is created for each imported data-block.
Embedded data-blocks have features of appended and linked data-blocks:
- They are stored within the .blend file that uses them. So the original .blend file can be removed without breaking files.
- They are still part of a library reference, which has a few consequences:
  - They can’t be modified in a way that makes them non-interchangeable with the original data.
  - Their name can stay as is and does not have to be made unique across all data-blocks.
If the user wants to edit the asset locally, it has to be made local. By doing that, it’s not automatically deduplicated anymore. Additionally, we could also support creating local overrides for embedded assets.
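A minimal sketch of how a virtual library could deduplicate embedded assets on import. The names here are hypothetical; the real structures would be Blender’s ID and Library types:

```python
from dataclasses import dataclass

@dataclass
class VirtualLibrary:
    # Identified by the asset's deep hash instead of a file path.
    deep_hash: str

@dataclass
class EmbeddedBlock:
    name: str
    lib: VirtualLibrary  # plays the role of ID.lib

class BlendFile:
    def __init__(self):
        # One virtual library (and thus one embedded copy) per deep hash.
        self._by_hash: dict[str, EmbeddedBlock] = {}

    def embed(self, name: str, deep_hash: str) -> EmbeddedBlock:
        """Import an asset; reuse the existing embedded copy if the deep
        hash matches, so identical assets are never duplicated."""
        block = self._by_hash.get(deep_hash)
        if block is None:
            block = EmbeddedBlock(name, VirtualLibrary(deep_hash))
            self._by_hash[deep_hash] = block
        return block

f = BlendFile()
a = f.embed("Suzanne", "abc123")
b = f.embed("Suzanne", "abc123")  # importing the same asset again
assert a is b  # deduplicated via the deep hash
```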
Discussion
This section contains some extended information that may be important to consider in the context of this proposal.
Versions and Variations
The core data-block management code does not have to know about variations of a data-block with this proposal, because the core problem that leads to duplicate data-blocks is solved at a lower level. Nevertheless, I briefly want to touch on that topic because it’s quite related anyway.
Changing to a different variation (or version) of a data-block comes down to first finding the available variations and then using the generic replace data-block operator. The unsolved problem here is how to find the set of available variations. Currently, I imagine that a combination of two approaches would solve the majority of use-cases:
- A virtual library could still store the original file path and data-block name that it comes from. Then Blender can detect whether there is a new data-block variation at this place that the user might want to update to. This works out of the box without any extra work by the user.
- Asset authors can optionally assign globally unique data-block identifiers to their assets. Blender could then scan all available assets to find the ones with the same identifier, and the user can choose which one to switch to. The variations could have a label and a version number. Those don’t affect low-level data-block management code, but help the user or higher-level scripts decide which variation to switch to.
Appending Data-Blocks that use Assets
Generally, embedded assets should stay embedded data-blocks unless they are explicitly made local. So, by default Blender would just copy the used assets to the current .blend file, but keep their virtual library intact.
Linking Assets (without embedding)
Ideally, we’d only allow linking to asset libraries if they are part of the project that the user works on. This is important to keep projects self-contained. Unfortunately, while planned, Blender does not know what a “project” is yet. Until then, we probably need the ability to let the user choose between linking and embedding for each asset library.
Embedding should certainly be the default and is enforced for the essentials library.
Assets Referencing Other Assets
Within a growing asset ecosystem, it will become more and more common that assets use other assets from the same or another asset library (including the essentials asset library). This will probably be most common with node group assets.
Each asset library should be self-contained, so it should not link to assets outside of that library; it can embed assets from other libraries though. Individual asset files within a library do not have to be self-contained, so they can link to assets in the same asset library.
For the asset author, an asset library is a project. The rules for projects in general seem to apply here too.