Asset Linking [Proposal]

Currently, assets are either linked, appended, or imported with the “Append (Reuse Data)” mode. All of these modes have problems that make them unfit for use with the growing asset system. This severely limits our ability to ship assets with Blender and also makes the asset system annoying to use in non-trivial production settings.

This document proposes a new approach for importing assets which solves the problems, making the asset system much more useful overall. At a minimum, these problems have to be solved:

  • Assets can be embedded into .blend files while keeping them read-only.
  • The same asset should only exist at most once in each file, avoiding unnecessary asset duplication.

First, the proposal is presented and then some of the design choices and options are discussed.

Proposal

The proposal combines multiple small ideas which don’t help much on their own. However, taken together they seem to achieve the goal quite well.

Globally Unique Asset File Names

Files that contain assets have to follow a specific naming scheme: namespace.name:vX.Y.Z.blend (or similar). For example, our existing assets could be renamed like so:

assets/geometry_nodes/
  blender.procedural_hair_node_assets:v1.0.0.blend
  blender.smooth_by_angle:v1.0.0.blend

The directory path is not taken into account. An arbitrary number of assets can be in each file, but it can be beneficial to split assets up into multiple files in many cases. Nested namespaces of the form ns1.ns2...name work too.
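As a rough illustration, the proposed naming scheme could be checked with a small parser. The exact pattern below (allowed characters, lowercase-only names) is an assumption for illustration, not part of the proposal:

```python
import re

# Hypothetical pattern for "namespace.name:vX.Y.Z.blend".
# The allowed character set is an assumption made for this sketch.
ASSET_FILE_RE = re.compile(
    r"^(?P<namespace>[a-z0-9_]+(?:\.[a-z0-9_]+)*)"  # ns1.ns2...
    r"\.(?P<name>[a-z0-9_]+)"                       # asset name
    r":v(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)\.blend$"
)

def parse_asset_file_name(file_name):
    """Return (namespace, name, version) or None if the name is invalid."""
    m = ASSET_FILE_RE.match(file_name)
    if m is None:
        return None
    version = (int(m["major"]), int(m["minor"]), int(m["patch"]))
    return m["namespace"], m["name"], version

print(parse_asset_file_name("blender.smooth_by_angle:v1.0.0.blend"))
# ('blender', 'smooth_by_angle', (1, 0, 0))
```

Note that the namespace group is greedy, so in a nested name like `studio.props.rock_pack:v2.1.3.blend` everything before the last dot becomes the namespace and the last segment becomes the asset name.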

Immutable Asset Files

Once published, asset files should not be changed anymore. If changes are necessary, a new file with a new version number should be created.

Asset Path in Library Data-Block

Currently, Library.filepath can store either a relative or absolute path. This is extended to support asset paths of the form asset:namespace.name:vX.Y.Z.blend. Note that this path does not contain a directory path.

When loading such a library, Blender first has to find the corresponding .blend file on disk. It does so by searching for the file name in all asset library paths recursively. Obviously, the set of available asset files and their corresponding paths should be cached.
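The lookup Blender has to do can be sketched as follows. The function names and the cache shape are illustrative assumptions; the real implementation would live in Blender's library-loading code:

```python
import os

ASSET_PREFIX = "asset:"

def build_asset_file_cache(asset_library_paths):
    """Map each asset file name to a full path, scanning libraries recursively."""
    cache = {}
    for library_path in asset_library_paths:
        for dirpath, _dirnames, filenames in os.walk(library_path):
            for file_name in filenames:
                if file_name.endswith(".blend"):
                    # First hit wins; equally named files are assumed equal
                    # because of the immutability constraint.
                    cache.setdefault(file_name, os.path.join(dirpath, file_name))
    return cache

def resolve_library_filepath(filepath, cache):
    """Turn an "asset:..." path into a real path; pass other paths through."""
    if not filepath.startswith(ASSET_PREFIX):
        return filepath  # regular absolute/relative library path
    return cache.get(filepath[len(ASSET_PREFIX):])
```

The cache would be rebuilt or invalidated when asset library paths change; how exactly that invalidation works is left open here.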

Embedded Linked Data-Blocks

Support storing individual linked data-blocks in the .blend file that uses them. This allows sharing the file with someone who does not have all assets installed. Just like normal linked data-blocks, embedded ones are not allowed to be edited.

Embedded linked assets with the same file name are all considered to be equal. This is possible because of the immutability constraint mentioned above. So in a situation where there are multiple instances of the asset, Blender can choose one arbitrarily. For example, in the situation below, there are actually three stored instances of the same material data block. When loading shot.blend, any of those can be used. If the original asset does not exist anymore, it should still work, because one of the embedded materials can be used.

[image: diagram of shot.blend linking character files that each embed the same material, resulting in three stored instances of one material data-block]
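The deduplication rule this enables is simple. A sketch with invented data structures, where the immutability constraint is what justifies picking an arbitrary representative:

```python
def deduplicate_embedded(embedded_blocks):
    """Pick one representative data-block per asset file name.

    embedded_blocks: list of (asset_file_name, datablock) pairs, where
    datablock stands in for an ID loaded from an embedded library.
    Because published asset files are immutable, all blocks that came
    from the same asset file name can be treated as equal.
    """
    chosen = {}
    remap = {}
    for file_name, block in embedded_blocks:
        representative = chosen.setdefault(file_name, block)
        if representative is not block:
            # All users of `block` would be remapped to `representative`.
            remap[id(block)] = representative
    return chosen, remap
```

In Blender itself the remapping step would correspond to remapping ID users, but the selection logic is just this: group by asset file name, keep one.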

Discussion

This section explains and justifies some of the design decisions in this proposal.

Asset Identifiers

Are globally unique file names realistic?

I think they are. This is quite similar to how addon or operator names must be unique already. Enforcing a specific file name structure can help with that by forcing the asset author to think about what a good namespace name would be. It also enforces the use of versioning.

Can we use unique data-block identifiers instead?

Technically yes, ensuring uniqueness on the data-block level can work, but it comes with additional challenges:

  • Asset authors have to come up with more unique identifiers if there are multiple data-blocks in an asset file.
  • It’s not enough to give unique identifiers only to data-blocks that are marked as assets, because they might internally use other data-blocks that need unique identifiers too, even if they are not assets themselves. For example, a procedural object as an asset might indirectly use a mesh, a material and a node group data-block. Sometimes it’s even hard to find all the data-blocks that would need unique names.
  • Deeper changes are necessary in Blender because right now the proposal assumes that asset deduplication happens just based on Library.filepath.

Can generated unique identifiers be used?

Technically yes. For example, one could generate new unique identifiers every time the asset file is saved, or a hash of the .blend file can be used as identifier.

However, I expect this to make authoring assets harder in cases where one repeatedly changes the same asset file before it’s published. Creating a new identifier and maybe a new version every time seems unnecessary. Furthermore, generated identifiers are generally not human-readable, which makes it much harder to debug any potential issues.

Having the identifier and version clearly visible makes it much more obvious at a glance how the system works.

What should the namespace in the asset file name be?

Anything goes, but everyone has to try to find something that’s somewhat unique. The name of the author, the project, or a combination of the two should work. Using some (semi-)random string works too.

How does the asset file name correspond to the asset catalog?

Both are completely independent. This is important because one might want to move the asset around between catalogs without breaking files that link the asset.

Can the directory path of an asset be part of the unique name?

Technically yes, but I think that would make managing asset files harder, because one couldn’t move them around between folders for better organization anymore.

Can the identifier and version be stored in the .blend instead of in the file name?

Technically yes. We can store file level data in .blend files (tried here). However, there are also some downsides:

  • It’s much less obvious what file an asset belongs to. One couldn’t e.g. search for the linked asset file name on disk to see if it exists anywhere.
  • Makes it more likely for version numbers in the file name and file contents to go out of sync, leading to confusing behavior.
  • Might require additional data to be stored in Library besides the filepath to identify the asset file.

I’m not sure how much people value being able to choose custom file names in their asset libraries. I currently don’t see much of a benefit in separating the file name from the file identifier, but either way would work in this proposal.

Data-Block Management

Can embedded linked data-blocks go out of sync with the original?

Technically yes, within limits. This can happen because the embedded data-block might already have versioning applied that the original has not. Furthermore, Blender does allow some kinds of edits to linked data-blocks, like changing the selected node in a node group. Even with these possible deviations, it still seems okay to consider them to be the same data-block.

Technically, the original asset file could also be changed after it has been embedded, but that conflicts with the immutability requirement and should practically never be done if the asset has been published already.

What is the default behavior for adding assets?

For assets shipped with Blender, we should always link and embed them. This makes sure that files will still work in future versions of Blender, when we ship a potentially different set of assets.

For other assets, just linking them is probably a fine default. These are assets that the user installed, so it’s more obvious that the .blend file may depend on them. There should still be an option to embed all linked data-blocks.

Why support embedding linked data-blocks if we have packed libraries already?

Embedded data-blocks can be more performant and memory-efficient.

  • They can be loaded like other local data-blocks and don’t need the overhead of parsing another library.
  • Packed libraries embed the entire library .blend file which may contain unused data-blocks and their own SDNA.

How does this relate to the “Append (Reuse Data)” mode?

This mode was troublesome from the start, because it can’t actually detect if data can be reliably reused or if it was changed already. It can probably be removed.

Note that when importing certain assets, we might still want a combination of append and asset-linking. For example, when adding a mesh object, we may want the object to be appended and the mesh and material to be asset-linked.

How to edit imported assets locally?

Asset-linked data-blocks are generally not editable because they are linked. That’s true even if they are embedded. Like existing linked data blocks, assets can be made local and then they are editable. It may potentially be useful to keep a reference to the original asset that it came from.

What happens when a data-block that uses assets is appended?

There are different possible solutions:

  • Appending a data-block also appends all indirectly referenced data-blocks. This can result in duplicate data-blocks, because appended data is not asset-linked anymore, so it can’t be deduplicated.
  • Indirectly asset-linked data-blocks remain asset-linked. This seems more useful overall. It may be good to embed the assets by default in this case, because users expect appended data-blocks to work without references to other files.

Personally, I’d try to keep assets linked for as long as possible until the user explicitly makes them local.

Misc

What about existing assets?

We could allow the proposed asset linking for existing asset files, but I think that would cause problems because there was no uniqueness requirement before, which is essential. Old assets can just use the existing link and append features. Old assets can be detected because they generally don’t follow the proposed file name structure. If they already do, great.

Can this be integrated with the extensions platform?

Yes! I could imagine people sharing files which use assets that are available on the extensions platform. Since asset files have globally unique names, Blender could potentially download the required assets automatically (given user consent, of course). The extensions platform can also store older versions of assets.

Can we prevent users from changing asset files after they are published?

Not really. There will always be ways to overwrite or replace an asset file without changing the name, even after it has been published. There might even be valid reasons to do that in rare cases when the impact is known. We could introduce some safeguards though. For example, we can make some asset files read-only (e.g. the ones we ship with Blender). Furthermore, when trying to overwrite an asset file from within Blender, there could be a dialog that asks whether a new version should be created instead.

Can assets be updated when new versions are available?

Given that the version number is part of the file name, Blender should be able to detect when there is a new version of some asset file. The update probably still shouldn’t happen automatically in general, but the user can be prompted.
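Because the version is encoded in the file name, update detection needs no file parsing at all, only name comparison. A sketch (the helper names are invented):

```python
def split_versioned_name(file_name):
    """Split "ns.name:vX.Y.Z.blend" into ("ns.name", (X, Y, Z))."""
    identifier, _, rest = file_name.partition(":v")
    version = tuple(int(part) for part in rest[: -len(".blend")].split("."))
    return identifier, version

def find_newer_version(current_file_name, available_file_names):
    """Return the newest available file name for the same asset, or None."""
    identifier, version = split_versioned_name(current_file_name)
    newer = [
        split_versioned_name(name)[1]
        for name in available_file_names
        if name.startswith(identifier + ":v")
        and split_versioned_name(name)[1] > version
    ]
    if not newer:
        return None
    return f"{identifier}:v{'.'.join(map(str, max(newer)))}.blend"
```

Comparing version tuples this way gives the usual semantic-version ordering, so v1.10.0 correctly ranks above v1.9.0.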

Summary

This document proposes a new approach for importing assets that solves existing problems in a way that seems quite feasible to implement. What do you think?

23 Likes

Generally, I like this proposal. The thing I’m a bit concerned by is enforcing a versioning pattern in the filename.

What if, instead of:

we say:

Embedded linked assets are different if their names are different and their content (hashes) are different.

So from a user perspective, working on an asset and then saving over the same file will not register as a new version, because the names are the same. If only the name is changed, for example by appending a _v2, the asset is still considered the same, because the content matches. Only once the user changes the file somehow, then saves again, will it register as a new asset.

Internally in Library.filepath we can store asset:namespace.name:hash.

I think this will remove the hard requirement of a version string in the filename, while still making it easy to handle as an asset creator or user.

5 Likes

With a little change, could there be nested namespaces?

Something like this

asset:ns1.ns2.….nsN.identifier:vX.Y.Z.blend

Between the two colons, there is a list of names separated by dots.

The list has at least one entry. The last entry of the list is always the identifier; its predecessors together form the nested namespace.

It would help if you could describe what kinds of problems you foresee by using the filename.

You will have to define what you mean by “content hash” more exactly. Is it a hash of the bytes of the .blend file, or some kind of hash that doesn’t change if you just open and close a file (which does change the bytes)? The former introduces a big new concept; hashing a file in this way is not easy. The latter is simpler, but probably semantically equivalent to just generating a random number every time a file is saved. Hashing the bytes of a file and then writing the hash into the file is a little bit tricky too, because that changes the file.

This can work, it just doesn’t feel worth the complexity. Also, you still tie the asset identity to the file name in some way, which forces you to change the file name for a new version. For me personally it’s just harder to reason about the system when a hash is involved, so that needs to be justified more (not saying it’s impossible to justify).

I might be wrong, but I think you replace version numbers with hashes here. That can work generally, but it seems like we’ll still need an additional system with version numbers to be able to have tooling for updating assets to new versions.

Yeah, I had that in mind already too. Makes sense. I changed the proposal a bit to include that more explicitly.

2 Likes

I don’t really know a lot about this topic; just adding that names (as part of file paths) can have length limitations on some systems.

I’ll start with seeing a folder filled with a few dozen files, all starting with “blender”. Should be fun to sort that alphabetically…

Can it not instead be some sort of metadata? I.e., the way photo or video files contain various embedded info.

ETA:

To add to the above: at the root of the matter, I really have no issue with the name of a node group being the comparison point. In my work, this would be fine. What I would not want is to have to follow some long, complicated naming guideline in order for it to actually work.

If I call a node group MyGroup, that’s what I want to call it and that’s what I want it to be compared with when I append something else.

If other asset creators want to have version strings and dates and some other sort of suffix going on that’s perfectly fine, but shouldn’t be required for the system to just do its thing.

What are the issues that you’re wanting to fix here? The problem space isn’t defined, so it’s not easy to give feedback on the proposal.

2 Likes

Have a look at the blog post and the task linked above.

If that’s still not clear enough (it’s somewhat hard to describe the problem in simple words), I can try to explain it again here, when I’m back home.

1 Like

I’m not sure if this is relevant, but on Windows, I once had some Linux open source apps doing raw writes to the hard disk or something, and I ended up with a very long path of nested folders and long file names (this was at the beginning of the Windows XP era). That meant I could not delete, rename, or access the file from the GUI.
So in Blender’s case, if you have the assets in nested folders, the length of the full path could make the effective limit on the file name length shorter than the maximum as well.
However, this may remain a theoretical limitation in practice.

1 Like

Would the current override system in Blender still work exactly the same with this newly proposed asset linking?

Meaning if I link in an “immutable” data-block, can I still apply an override on top of it?

Following to this another question:

Let’s think about this workflow:
A modeling artist creates a “CharA” collection with objects in it.
A shading artist links in the “CharA” collection and applies overrides that add/edit some shading.
A layout artist links in the “CharA” collection from shading and applies a mesh sequence cache modifier as an override.
A lighting artist links in the “CharA” collection from layout.

In my mind, the “original” data-block in this scenario is the one from the modeling artist. It just gets linked into the other steps and gains more overrides along the way.

How would the new system be able to detect that the asset is in fact different (through overrides) if a user links in the CharA collection from multiple different department steps?

Is it possible to link in an asset datablock, apply overrides, and create a new asset datablock from it that still contains the linked reference to the other?

There will always be ways to overwrite or replace an asset file without changing the name, even after it has been published. There might even be valid reasons to do that in rare cases when the impact is known.

I think in a pipeline, especially with smaller teams, having one asset file that gets overwritten on each publish can be very common. It can save a lot of the overhead of going into each file that links this asset and manually updating/relocating it to the latest version, especially in nested link scenarios.

So if this “immutable” feature will be a fixed requirement, we definitely need a way to tag an asset to always stay on the latest version. But I think this opens up many other questions, like how Blender knows where all versions of this asset are stored, etc.

1 Like

Yes, it’s still normal linking after all, just that instead of storing a filepath for the linked library, just the asset file name is stored.

I think for assets that you create and use within the same production and team, you’re probably better off just using normal linking like before. This is not going away.

Here I’m more concerned about the case where the people creating the assets are independent from the people using them. Imagine you use some third party assets, or even just assets bundled with Blender, in your production. I think in this case it would be quite bad if they suddenly changed without any indication of a new version.

That said, I think it’s still possible to have an automated update mechanism if that’s desired, as mentioned at the bottom of the original post.

2 Likes

Here is an example that shows that the current “Append (Reuse Data)” import mechanism for assets is not working. The image shows a subset of the data-blocks in the demo file from the Charge project. Note how there are many duplicated data-blocks which are internally all the same.

[image: Outliner subset showing many duplicated copies of the same data-blocks in the Charge demo file]

This graph shows how this can happen. Since “Append (Reuse Data)” is just appending with some extra logic for when the same data-block is added to the same file again, both character files end up with a copy of the material. Both copies can be edited independently, and Blender does not prevent you from editing them. When both characters are linked into shot.blend, you end up with two versions of the material which are usually the same, but Blender can’t be sure of that.

[image: graph showing how “Append (Reuse Data)” gives both character files their own copy of the material, which are then linked into shot.blend]

Just using normal linking instead of “Append (Reuse Data)” would have solved that, and I think newer productions are using plain linking more again. Linking works fine for assets created as part of the production. However, we don’t want people to just link to assets bundled with Blender: that makes it much harder to improve Blender and the assets it comes with in the future, if we want to maintain backwards compatibility (which we want!). This is why embedding linked data-blocks is important.

Even if people wanted to link to assets bundled with Blender nowadays, that would result in files that break very easily, because one would have to hard-code the asset path to the Blender installation directory. This is why it’s good to be able to reference assets by their name directly, instead of by an absolute or relative path to the .blend file. It makes the .blend files much more resilient to changes in the way assets are stored in asset libraries, and makes it possible to share files between people who have the asset library stored in different places.

3 Likes

The problem with current Asset system is that it’s:

  1. Extremely buggy due to unnecessary complexity
  2. Takes lots of mental energy to use

The proposal feels like it’s trying to fix asset browser by adding even more complexity and further increasing the mental energy needed to create and use assets.

There are many examples of very well functioning asset systems which are much less complex, much more reliable and take much less mental energy to interact with. Especially in the world of game engines.

First, I think we need to stop talking about asset deduplication. Asset duplication is a symptom of a failure of the current asset system. We should not be figuring out complex systems to fix the problem after it happens, but instead avoid the problem before it happens. In other words, Blender’s asset system should not be duplicating assets as haphazardly as it does now in the first place, so that we don’t need to invent tools to clean them up.

One of the largest issues the asset system currently suffers from is that it keeps under-the-hood references to source library assets, which are very fragile and lead to bugs like this one:

So asset duplication is almost always guaranteed at some point.

The idea of 3 simple modes is quite sound:

  1. Link
    • I want the asset to be read-only
    • I want the asset to always be updated to the latest version when somebody updates the library file
    • I don’t want to edit the asset locally
    • I am willing to take the risk that the scene can lose data if the source library file goes missing (there won’t be a local copy in the scene blend file)
  2. Append
    • I want the asset to be copied into my scene file, so the scene file is fully functional without the library file being present
    • If there’s already an asset, or any of the data-blocks it uses, under the same name in my scene, I expect the data to be duplicated (number padded copies)
    • I have the option to edit the asset locally without affecting anything outside the file
    • I understand the asset won’t be kept in sync with the latest version in the library
  3. Append & Reuse
    • I want the asset to be copied into my scene file, so the scene file is fully functional without the library file being present
    • No duplicated data-blocks (number padded copies) can ever be created when importing assets from the library into the scene in this mode
    • I have the option to edit the asset locally without affecting anything outside the file
    • I understand the asset won’t be kept in sync with the latest version in the library

This is how it should work in theory. In practice, it unfortunately doesn’t. The Append & Reuse mode’s internal references will always sooner or later break, and the mode will start duplicating assets.

Here’s how I’d fix it:

  1. For Append & Reuse, always use the raw, user-editable and user-visible data-block names, not some internal reference, and introduce a confirmation dialog.

It is possible for an asset imported using Append & Reuse to have data-blocks whose names conflict with data-blocks already present in my scene. In this case, I don’t want some opaque, unreliable weak reference to determine what to do.

I want a popup dialog telling me that there are already data-blocks with these names in my scene, and I want to see the list of their names. Then I can manually decide whether I want to:
A: Import this asset and reuse those data-blocks (which could be destructive to the asset if they aren’t the right ones).
B: Import without Reuse and fall back to duplication (number padded copies of the conflicting names will be created).
C: Cancel. Before the import operation is done, I want to be able to stop it if I see there could be a conflict, rename the conflicting data-blocks in my scene, and try again.
D: Remember the last choice and automatically perform it next time.

  2. Don’t offer a choice between using the already imported asset and importing a new one from the library in the add menus.
    [image: add menu offering both the imported copy and the library version of the same asset]
    Once an asset of a given name has been imported from the library, it should show up only once in the add and search menus. It should show only the local copy or the linked data-block, so there’s no risk of unintentionally reimporting the same asset twice.

  3. Add a manual “Update” operator to the Asset Browser asset context menu.

This operator would be intended for manually updating assets imported using Append or Append & Reuse to the latest version from the asset library. This can, once again, only reliably work with a confirmation dialog.

This dialog would list all the matching data-block names between the source asset and the target scene, and give the user 3 options:
A: Overwrite all the matching data-blocks with the ones from the asset library.
B: Make number padded copies of all the conflicting data-blocks, and overwrite only the “root” asset. (For example, if I have a Rock Generator GN node group which uses a Remesh node group inside, the Rock Generator would be overridden by the newer version from the asset library, but Remesh would be duplicated to Remesh.001.)
C: Cancel.


There may be more ways to solve this than the one I am proposing, but the more general issue is that the asset system is currently so unreliable and complex that people use it a lot less than they should. I just can’t imagine users starting to use it more if the complexity increased even further.


This is really just caused by the design flaw of matching assets by internal weak references instead of the raw data-block names, as I wrote above. I think we should first try to fix that, before turning the entire asset system upside down.

11 Likes

I don’t really have time to get into this until after next week, but some quick feedback.

I think this is the wrong approach. Linking introduces a bunch of mental overhead where you have to be careful about how you set things up, how you replicate it across computers, how you update things when you reorganize and rename files, etc. Basic append & reuse should not require that overhead.

Globally unique asset file names only seem meaningful if a user is just as careful as they would be when using actual linking. And immutable asset files seem impractical when you have a blend file with dozens of assets and you need to save a new version every time you edit just one of them.

4 Likes

Looks like you indeed found a bug there that should be fixed (I didn’t check it in more detail yet). However, even if that part worked perfectly, it would still not solve the case of duplicated data-blocks after an additional linking step. See my previous post.

I don’t intend to do any development work (other than thinking about the design) on this topic until you and Bastien are back, so just take your time. :slight_smile:
It will be interesting to hear if you have another approach in mind then. Every other solution I considered so far (except for the possible alternatives mentioned in the first post) had significant issues when I tried to apply it to different situations.

1 Like

I’m not sure what the ideal solution is, but what I remember discussing with Dalai, Bastien, and Julian before is something like this.

Compare datablock contents to see if they are functionally equivalent. This would require either implementing a custom comparison function for all datablocks (ugh), or a generic iterator over all data. The iterator could be based on RNA, with properties having a flag to be ignored for this purpose. The override system already does something like this to detect changes. This could miss some things, or give false positives, which could be solved somehow or decided to not be important to check depending on how likely each case is.
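As a toy illustration of the first idea, here is a structural comparison that skips properties flagged as non-functional. Plain dicts stand in for RNA structs, and a hard-coded name set stands in for the per-property ignore flag; all names here are invented, and the real implementation would iterate RNA or DNA:

```python
# Property names treated as non-functional (UI/selection state).
# In Blender this would be a flag on the RNA property, not a name list.
NON_FUNCTIONAL_PROPS = {"select", "active_node", "show_expanded"}

def functionally_equal(a, b):
    """Compare two nested dicts while ignoring non-functional properties."""
    keys_a = {key for key in a if key not in NON_FUNCTIONAL_PROPS}
    keys_b = {key for key in b if key not in NON_FUNCTIONAL_PROPS}
    if keys_a != keys_b:
        return False
    for key in keys_a:
        value_a, value_b = a[key], b[key]
        if isinstance(value_a, dict) and isinstance(value_b, dict):
            # Recurse into nested structs.
            if not functionally_equal(value_a, value_b):
                return False
        elif value_a != value_b:
            return False
    return True
```

The false-positive/false-negative concern from the paragraph above maps directly onto how this ignore set is chosen: too small and harmless UI edits break equality, too large and genuinely different data-blocks compare equal.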

Another approach would be to have a UUID that is updated whenever functional changes are made to the datablock. This requires improving the DEG_id_tag_update mechanism, to make it not just used for the depsgraph, but tracking edits in general. Flags would need to be chosen to ensure e.g. selection does not cause an update. Downside is that it can’t detect equivalent setups created independently.

1 Like

I’ve re-read it now but not sure I still understand it. When appending (not linking), shouldn’t material.blend point directly to the shot.blend instead of to the character 1 or 2 blend?

When you Append & Reuse from character1.blend, and then proceed to Append & Reuse from character2.blend, you are telling Blender “I want to reuse data-blocks already present in my scene”. So if, let’s say, character1.blend had the “shirt” material linked from “shot.blend”, then when appending from character2.blend, Blender should go “Oh hey, there’s already a material named ‘shirt’ here, let’s use that”.

The whole criterion for Append & Reuse should be matching data-block name and matching data-block type. Nothing more complex than that.
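Under that name-and-type criterion, detecting which data-blocks would be reused before committing the import is a simple set intersection. A sketch with invented data structures; the result is what the proposed confirmation dialog would list:

```python
def find_reuse_conflicts(incoming_blocks, scene_blocks):
    """Return the (name, type) pairs present in both the asset and the scene.

    incoming_blocks / scene_blocks: iterables of (name, type) pairs,
    standing in for the data-blocks of the imported asset and of the
    current scene respectively.
    """
    return sorted(set(incoming_blocks) & set(scene_blocks))

conflicts = find_reuse_conflicts(
    [("shirt", "Material"), ("body", "Mesh")],
    [("shirt", "Material"), ("Rock", "Material")],
)
print(conflicts)  # [('shirt', 'Material')]
```

An empty result would mean the import can proceed silently; a non-empty one would trigger the reuse/duplicate/cancel dialog described above.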

That would of course cause big problems if the two shirt materials were supposed to be different. That’s why I mentioned that some sort of dialog is crucial: a dialog which shows the user the names and sources of the data-blocks that will be reused, before committing to the append operation. In that dialog, the user can see where the “shirt” material is coming from and decide whether to:

  • Use the one which is already in the scene
  • Make a number padded copy of the one from the appended file
  • Cancel the operation, do manual renaming to avoid conflict, and try again

The reason I think this is important is that I don’t believe there will ever be a heuristic reliable enough to determine the user’s intention in each case.

Like, often users will quickly name their material based on the most obvious thing, like “Rock”, and then import an asset from some other person, who of course also named the very first rock material they made in their scene just “Rock”.

Most users will have the asset browser import set to Append & Reuse, because pretty much everyone hates cleaning up scenes full of number padded duplicates. But at the same time, it’s super easy to forget what you named some material a while ago, or to not even know what material names the asset you are about to import contains.

And that brings me to the other reason why a dialog before confirming the operation is so important. Right now, if you make the easy mistake of forgetting that you named a material the same name as a material of the asset you are about to import (or you are simply not aware that the imported asset has a material of the same name), Blender will just perform the operation, make a mess in your scene, and offload the cleanup duty onto you. And this assumes you are a control freak who constantly has the Outliner open in Blender File mode to stay aware of the data in your scene, which most people do not. So we want Blender to help us catch the mistake before it happens, instead of being quiet about it and letting us discover something like this 2 days of work later:
[image: Outliner showing many number padded duplicate data-blocks]

Rather than trying some advanced mechanism to determine asset uniqueness, why not just ask the user whether they want the given data-block to be unique or not?

2 Likes

Yeah, that’s what I remember too. I started experimenting with implementing a comparison function already, but didn’t really get anywhere yet and started considering other solutions. As for the comparison function itself, I currently think a nice approach could be to change the blend_write callback into a more general foreach_dna callback. This could be used for writing .blend files, hashing IDs, comparing IDs, and potentially also for the override system if we integrate RNA into the same iterator. It feels like that could be much more efficient than the current RNA-based iteration in the override code.

When considering this solution, I found two issues that I didn’t want to just ignore:

  • Load performance: Loading production files is somewhat slow already. I could imagine it becoming much slower if many data-blocks are loaded potentially many times, just to be compared and discarded again. It would be much better to discard duplicate data-blocks before they are loaded.
  • Ambiguous data-block references: this is a bit more difficult, I’ll try to explain below.

Imagine the following situation:

  1. There is a shot.blend file that links in alice.blend and bob.blend, which both use the same skin material that has been appended into each character file.
  2. When opening shot.blend, there is initially a skin [alice.blend] and a skin [bob.blend].
  3. Then Blender detects that those are equal and deduplicates them into something like skin [alice.blend + bob.blend].
  4. Next, a new local Eve character is added in shot.blend directly, which also uses the linked-in skin [alice.blend + bob.blend] material.
  5. Now the skin materials in alice.blend and bob.blend are modified independently, so that they don’t match anymore.
  6. When shot.blend is opened again, skin [alice.blend] and skin [bob.blend] are not the same anymore, so they can’t be deduplicated.
  7. Eve used to reference skin [alice.blend + bob.blend], but that doesn’t exist anymore. Now it’s not clear which material it should use instead.
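The steps above can be simulated in a few lines. This toy sketch (no Blender API; load_shot is an invented helper) shows how a merged identity created by deduplication silently disappears once the source files diverge, leaving a dangling reference:

```python
# Toy model: "loading" shot.blend either keeps the two linked skins
# separate or, if their content matches, merges them into one entry.
def load_shot(alice_skin, bob_skin):
    if alice_skin == bob_skin:
        # Step 3: deduplicate into a merged identity.
        return {"skin [alice.blend + bob.blend]": alice_skin}
    return {"skin [alice.blend]": alice_skin, "skin [bob.blend]": bob_skin}

# First open: the two skins match and get merged.
mats = load_shot({"roughness": 0.4}, {"roughness": 0.4})
# Step 4: the local Eve character references the merged identity.
eve_material_ref = "skin [alice.blend + bob.blend]"
assert eve_material_ref in mats

# Step 5: the two source files are edited independently.
mats = load_shot({"roughness": 0.4}, {"roughness": 0.9})
# Step 7: Eve's reference no longer resolves to anything.
assert eve_material_ref not in mats
```

The ambiguity is exactly the last assertion: there is no principled way for the loader to decide whether Eve should now point at the alice.blend or the bob.blend version.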

That approach does not have the issues mentioned above, but has others:

  • It’s much less transparent to the user when a data-block gets a new identifier.
  • Can’t really update an asset without changing its identity, regardless of whether it has been published already or not.
  • Asset data-blocks which indirectly use the modified data-block also need a new UUID. Whether that’s a problem depends on whether you expect UUIDs to be used just for deduplication, or also for linking to specific data-blocks.
  • This creates a system of variations of an asset that is independent of versioning, which might be redundant.
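The UUID-propagation point in the list above can be illustrated with a small sketch (assumed content-derived identifiers, not Blender code): if an identifier is derived from a data-block's content plus the identifiers of everything it uses, editing one nested data-block cascades new UUIDs through every dependent asset.

```python
import hashlib

# Content-derived identifier: hash of own content plus the identifiers
# of all direct dependencies (a Merkle-tree-style scheme).
def content_uuid(content, dependencies=()):
    h = hashlib.sha256(content.encode())
    for dep in dependencies:
        h.update(dep.encode())
    return h.hexdigest()[:12]

# A small dependency chain: object -> material -> texture.
noise_tex = content_uuid("noise texture v1")
skin_mat = content_uuid("skin material", [noise_tex])
alice_obj = content_uuid("alice object", [skin_mat])

# Edit only the nested texture...
noise_tex2 = content_uuid("noise texture v2")
skin_mat2 = content_uuid("skin material", [noise_tex2])
alice_obj2 = content_uuid("alice object", [skin_mat2])

# ...and every data-block that indirectly uses it gets a new UUID,
# even though the material and object content is unchanged.
assert skin_mat != skin_mat2
assert alice_obj != alice_obj2
```

Whether this cascade is acceptable depends, as noted above, on whether the UUIDs only serve deduplication or are also used as stable link targets.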

Btw, I don’t think that the asset linking approach explained in the first post has these problems. By default, we would link the asset but also embed it. That’s quite similar to Append & Reuse, except that you can’t edit the asset without making it local first. It’s easy enough to add explicit exceptions to this. For example, when using an object from an asset library, the object itself could be made local right away, while the mesh remains linked+embedded and needs to be made local explicitly if the user wants to edit it.

I’m sure we’re talking about the same thing; you just mention a few things backwards. No file points to shot.blend, only the other way around. Also, character2.blend does not use materials from shot.blend, because that file does not contain any materials itself; it just links them in from the character files. Both character1.blend and character2.blend contain a version of the shirt material that has been appended into them. The material only exists once in each of the character files.

When opening shot.blend, the shirt materials from both character files are loaded independently, and Blender does not know whether they are the same. It could compare them; that’s what I talked about above. It could also prompt the user, but it feels quite wrong that you’d first have to decide whether some potentially large number of data-blocks should be considered the same when opening shot.blend. And then you still have the problem with the ambiguous data-block references I mentioned above.


I agree 100% with this. As someone who has been dealing with file management for a 20+ person team, it gets incredibly tedious to clean up files and keep everything tidy and free of garbage data as it is now, because Blender NEVER asks me if I want to duplicate a data-block with the same name; it just adds copies automatically and I’m left with the tedious task of cleaning up afterwards. Even with addons or Python scripts to check names and delete duplicates, it’s tedious and annoying.
What’s being proposed here sounds like it will just add a lot more complexity to my work instead of helping, because there will be more things happening under the hood where I have no control at all. It doesn’t sound like an improvement at all.

For example, we have a pretty strict naming convention at the studio, so I’m 100% sure that when I see a material that ends with .001, .002, etc., it’s because Blender copied the same material instead of using the one that’s already saved in the file. And there’s no way to avoid that when appending a model or making a library override of a linked file. I never get asked whether I want to add a bunch of duplicates of a data-block when there’s already one with the same name in the file. That’s the real problem in my opinion.
Having a bunch of materials in different files that are all named the same is not a problem that Blender should try to solve automatically; that’s a problem of organization, and it’s up to the user to fix it.

It SHOULD compare them before; just give the user control here and add a pop-up/confirmation dialog asking whether a material/texture/node group should be duplicated, reused or ignored. No need to come up with extra complex systems under the hood to do something we as users should know is happening in the first place, so we can decide what we want to do with the data. Any automatic decision that Blender makes for us risks being the wrong decision, because it doesn’t know our intention in the first place.
Again, this is not a problem that Blender should fix; it’s a problem of poor organization, in my opinion. If you have several characters that have a material named shirt, it’s up to you as the user to make it clear that one is char1_shirt and the other is char2_shirt.
And any person creating assets to sell or share should also be careful enough to give their assets proper names. It’s true that identical names for materials or node groups will show up once a library gets big enough, but again, it should be up to the user to deal with that and decide what to do, not Blender.
