Proposal to Increase MAX_ID_NAME Limit in Blender 5.0

Hi, my name is Michał Krupa. I’ve been using Blender for the past 25 years, and I currently work at CD Projekt RED as an Engineering Manager for the Assets Production Pipeline team. We’re a group of pipeline TDs focused on developing tools to support our artists.

We’ve successfully integrated Blender into our production pipeline for certain asset types—primarily static meshes used in level design. However, we’ve encountered an issue related to mesh naming conventions that’s causing some friction.

Our artists often hit the current 64-character limit for mesh names (MAX_ID_NAME). While they’ve adjusted their naming to avoid exceeding this limit, things get tricky when the assets are exported to Unreal Engine via FBX. Unreal requires specific prefixes (e.g., UBX_ for collision boxes), and the naming format also includes the parent object’s name—quickly pushing us over the limit.

Given this, I’d like to propose increasing the MAX_ID_NAME limit to 258 characters (currently 66) in Blender 5.0. I understand this would break forward compatibility, but I’ve been told there are already compatibility-breaking changes planned for 5.0, so this could be a timely opportunity.

I realize the 64-character limit dates back to an era when every byte counted (I’ve been there 3,000 years ago). But in modern pipelines, where a single 4K RGB texture can use as much memory as ~195,000 object names, this restriction feels increasingly unnecessary.

I’ve already built a working version of Blender with the increased limit, and everything appears to function well. Implementation-wise, the change seems straightforward. My team and I are ready to drive this feature to completion, and we have access to a group of highly skilled C++ developers who can help support it.

So my main question is: Would it be possible to open a discussion about this change, or is the feature list for Blender 5.0 already closed?

20 Likes

Hi best to get in contact with @mont29 . There are already some changes planned to the id datablock.

2 Likes

This was discussed on chat yesterday before this post was made, where it was mentioned we likely wouldn’t accept another static increase, and the field would have to be made of variable length. Little strange that feedback was left out of this post.

4 Likes

I’m the one who wrote “I’m not sure we’d accept a contribution that increases the fixed size rather than making it dynamic” in chat. I meant that somewhat literally, from past developer discussions it’s not clear to me what the consensus would be.

But I personally think it would be ok to increase the fixed size for 5.0 if we have no other solution by then. I would rather not have the perfect be the enemy of the good if this solves real production problems.

11 Likes

I also ran into this limit every once in a while - and would welcome such a change.

It’s not too bad if everything lives in the Blender world, but as soon as you have to interchange with other sources this limit can be an issue - especially on bigger productions with sometimes very long, descriptive naming schemes.

2 Likes

Thank you for showing such interest in this matter.
What we need to consider when planning how this might be implemented is the benefit versus the cost of development.
Changing the buffer size for the name to 256 bytes is trivial.
We only have to find all the places where a buffer size has to match MAX_ID_NAME, and even a mediocre developer like me can do this.
When it comes to changing the actual data type of the ID name, things get complicated really fast, because a lot of code in Blender holds string data as char*, so the change would be huge.

The benefit of using a dynamic buffer for the name is that we can have virtually unlimited name length instead of 256 characters, but would that actually be useful enough to justify the development cost?
Changing the name length from 64 to 128 is a huge gain in usability; going from 128 to 256 is somewhat useful (names will rarely go above 128); but allowing names beyond 256 characters brings very little benefit, because exceeding that limit would be an extremely rare case.

Here’s a branch I’ve made as a test with MAX_ID_NAME set to 258. It seems to work; however, I still need to check some corner cases like importers/exporters etc.

3 Likes

The downside of a larger non-dynamic name is increased memory usage as more ID data-blocks are allocated. Since data blocks are also copied and created during dependency graph evaluation, I think the impact could be noticeable, so that would be the main benefit of a dynamic string.

Most new strings we add to DNA these days are char * rather than a fixed sized array for this reason. Of course it would be great if we could use std::string or an equivalent, but we are currently limited by DNA and the blend file format.

I don’t have a great sense for the production needs, but I would advocate for a compromise of a 128 length fixed buffer for now. The next step after that should be improving DNA so it can use dynamically sized strings for these existing cases.

6 Likes

@Michal_Krupa Hi! Are you open to submitting a PR with the current code you have? That would help evaluate the impact and run some tests.

It would also be nice to know some of the motivation behind the value of 256 (well, 258, but 2 bytes are for the ID code). Is it the bare minimum you need, or was it chosen with future-proofing in mind?

I am not sure dynamic strings for ID names are a realistic goal for 5.0. It doesn’t seem trivial, it is quite risky, and it is not something we were anticipating from a project-schedule perspective.

While I can see how unideal an increase of the static size is, it might be the most practical solution. It is better than sticking to the current limit for the next 2 years, anyway.

The memory impact is not that bad, I think. For 1M objects it will be 181 MiB extra, which is about 25% of the memory used by all Object structs. It is measurable, but it is not that common to have 1M objects (and no geometry attached to them).
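For what it's worth, the ballpark above can be reproduced with simple arithmetic. The assumption here (ours, not spelled out in the post) is that "extra" means growing each name field from 66 bytes to 256 bytes, i.e. 190 extra bytes per ID:

```cpp
#include <cassert>
#include <cmath>

// Extra memory in MiB from growing a per-ID field, for `num_ids` IDs.
double extra_mib(long num_ids, int old_size, int new_size) {
  return double(num_ids) * (new_size - old_size) / (1024.0 * 1024.0);
}
```

With 1,000,000 IDs and sizes 66 → 256, this gives roughly 181.2 MiB, matching the figure quoted above.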

I don’t think the performance impact in the DEG will be measurable. Surely there is an extra memory copy, but it only happens for the initial copy-on-eval, and there is much more happening during that.

1 Like

My general thoughts, for what it’s worth.

I can see how a 64-character limit could be an issue when going outside of, or working back and forth with, other software, but in some ways I’d even question the need to go all the way up to 256 characters.

There’s a point where names get so long that much of what’s in the middle is pretty much pointless and constantly ignored. You’d have to endlessly ‘scroll’ through text fields to read anything. And any naming standard that gets near that length is really so complex and error-prone that it would likely cause more problems than it solves.

It reminds me of the old days when long file names first became a thing and you started to see Word docs basically named after the whole first sentence of the included text. Half the time it still didn’t really tell you what the document was about, and it caused various other problems.

Even if ‘dynamic’ is the future, I’m still inclined to think that some sort of max limit would need to be set, if for no other reason than to stop some code error or poorly written Python add-on from generating or naming objects with ‘infinitely’ long names (hence crashing Blender or even the whole OS).

With all that in mind, I’m inclined to think that an increase to 128 (130 with said ID code) should be more than enough, and could even be set as the max limit for future dynamic lengths going forward.

I just want to point out that whether any length is too short, too long, or just perfect will differ by language. Although English needs only one byte per character, users of Cyrillic, Greek, Armenian, Hebrew, and many other scripts require two. Three bytes are needed for each Chinese, Japanese, or Korean character.
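The byte counts above follow directly from how UTF-8 encodes each script, which a tiny snippet can demonstrate (the string literals below use explicit UTF-8 byte escapes so they are encoding-independent):

```cpp
#include <cassert>
#include <cstring>

// In UTF-8, a name limit expressed in bytes buys a different number of
// characters per script: 1 byte for ASCII, 2 for Cyrillic, 3 for CJK.
size_t utf8_bytes(const char *s) {
  return std::strlen(s);  // NUL-terminated UTF-8: byte count = strlen
}
```

For example, Latin "a" is 1 byte, Cyrillic "\xD0\xB4" (д) is 2 bytes, and the CJK ideograph "\xE6\x97\xA5" (日) is 3 bytes, so a 64-byte buffer holds 64, 32, or 21 characters respectively (before the NUL terminator is accounted for).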

5 Likes

So really the character ‘limit’ is still the same, it’s just that the storage requirements could be three times as much.

So that 181 MB extra for 1M objects at 256 characters ends up being 543 MB. A little more significant then, or half that if going with 128 characters.

No. Our “character” limit is actually a byte limit. So you can currently enter 64 Latin alphabet letters, used for English, French, Spanish. Or you can enter 31 Cyrillic characters used for Russian. Or you can enter 21 Chinese ideographs.

1 Like

No, the above proposal is all about the bytes, not characters. Blender currently allows 64 bytes of data for each name.

The proposal would go from currently allowing only (64/3)=21 Japanese characters to allowing (256/3)=85 Japanese characters. I.e., other non-ASCII language characters are still penalized until a fully dynamic approach is possible.

3 Likes

Ahh, OK. Well, that’s a bit different then, and since it was referred to as ‘characters’ up to this point, I’m surprised this issue hasn’t been a significant roadblock before now, especially with Chinese or Japanese naming.

In which case, yeah, I think it needs to go to 256 bytes.

1 Like

I would mostly align with Sergey here, however I think that even the ‘static increase’ scenario is not as trivial as it looks.

Dynamic strings

This is impossible for Blender 5.0. Blender 4.5 is supposed to be able to open 5.0 blendfiles, i.e. this would have to be fully done code-wise within a month or so. We are already fairly late for the targets agreed on 4 months ago in that regard, adding more (and a fairly complex one at that) is simply not possible at this point.

Static size increase

In general, I think it’s fine to increase the static array size to 258. It indeed seems to be a need for some of our user base, and the consequences are not really big memory-wise.

I would go up to 258 mainly for non-Latin scripts, as pointed out above. Although ideograms are usually ‘worth’ several Latin letters (i.e. there are typically far fewer ideograms in a Chinese text than Latin characters in the equivalent English version), other script systems like Arabic, Cyrillic, or Hebrew will indeed need two bytes per character.

The other, non-ID names should also likely be increased accordingly (names of constraints, modifiers, bones…). Otherwise, we’ll need a very careful check that there is no hidden assumption that MAX_ID_NAME == MAX_NAME + 2 in our codebase, since AFAIK this has always been the case so far.
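If the two constants are kept in lock-step (or deliberately decoupled), the implicit relation could at least be made explicit so the compiler catches a mismatch. A hypothetical sketch, with made-up definitions rather than Blender's actual headers:

```cpp
// Hypothetical stand-ins for the real DNA constants.
#define MAX_NAME 64
#define MAX_ID_NAME (MAX_NAME + 2) /* 2 extra bytes for the ID code */

// If any code silently assumes this relation, asserting it makes the
// assumption visible and fails loudly if only one constant is bumped.
static_assert(MAX_ID_NAME == MAX_NAME + 2,
              "ID name buffer must be the plain name buffer plus ID code");
```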

Forward compatibility

This is the most tricky part IMHO. Blender 4.5 is supposed to be able to open 5.0 files, and give usable valid results.

  • For opening, it could be as simple as running an ‘ensure unique names’ check in doversion code. In case several ID names end up having the same first 66 bytes, this would truncate them further and add the famous unique .001 numeric suffix. We should also ensure that the results of these renames are predictable and reproducible (I think that is already the case with current code).
  • Not sure how to deal with linked data here? Maybe we just don’t care about it and accept complete breakage of linked data in Blender 4.5 in case linked ID names are longer than 66 bytes?
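The truncate-and-suffix step in the first bullet could look roughly like the following. This is a minimal sketch under stated assumptions (names as plain byte strings, a 66-byte limit, a helper name we invented), not Blender's actual doversion code:

```cpp
#include <cassert>
#include <cstdio>
#include <set>
#include <string>

// Truncate `name` to `max_len` bytes; on collision with an already-used
// name, shorten further and append a ".001"-style numeric suffix.
std::string truncate_unique(const std::string &name, size_t max_len,
                            std::set<std::string> &used) {
  std::string base = name.substr(0, max_len);
  if (used.insert(base).second) {
    return base;
  }
  for (int i = 1; i < 1000; i++) {
    char suffix[8];
    std::snprintf(suffix, sizeof(suffix), ".%03d", i);
    std::string candidate = base.substr(0, max_len - 4) + suffix;
    if (used.insert(candidate).second) {
      return candidate;
    }
  }
  return base;  // real code would widen the suffix instead of giving up
}
```

Because the suffix is assigned in the deterministic order the IDs are visited, the renames stay predictable and reproducible as asked for above (real code would also need to handle multi-byte UTF-8 sequences split at the truncation point).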

Wonder if folks who worked on the move from 34 to 66 bytes half an age ago remember anything about this aspect of the problem?

3 Likes

A bad idea worth considering is to add some kind of Display Name attribute in 4.5, and misuse it as a translation layer for future file versions with either longer static names or dynamic length names. It’s an approach that comes with its own pile of issues, and it would introduce artist-facing complexities and UI/UX concerns, but it would be robust and studio-friendly.

It’s something that some studios already do, albeit without adequate UI support, by creating Display Name or Long Name attributes, and performing name substitutions either mid-export with a custom exporter (for FBX pipelines) or post-export via file format SDKs (for USD pipelines). It’s not a good idea, but it’s tractable.

I think we would accept breakage for that case.

3 Likes

Hey - here’s the PR
https://projects.blender.org/blender/blender/pulls/137196

Please keep in mind it’s still a WIP. Currently I’m checking how this will impact things like import/export pipelines. It seems the FBX importer might be a bit tricky, because it applies the same rules to both nodes and properties, limiting them to 64 chars.

1 Like

Obviously it would be nice to have dynamic names everywhere, but especially for data-block names that seems tricky AFAIK. That’s because we sometimes need to access the data-block name before reading the entire data-block. The existing blo_bhead_id_name gives a pointer to the name without requiring other BHead data to be read.

For non-data-block names that should not be an issue. Also I’d be more careful with increasing the length there. It’s perfectly valid to have an order of magnitude more modifiers/bones/attributes/… than data-blocks. So if we add a significant amount of memory for each of these, it’s a bigger deal imo.

I think non-data-block names should just become dynamic. While it would be nice to just change char name[64] to char *name in DNA (and the other way around), that’s not entirely trivial to implement (I worked on it in the past).

That said, an alternative solution that should work fine and does not need any changes in 4.5 is to just have two name fields. One char name_legacy[64] and char *name_ptr (or similar). Starting in 5.0, we could use the name_ptr at run-time and copy that value to name_legacy on file-write for forward compatibility.

Always having name_ptr instead of name is a little bit annoying, but it’s probably the best approach to avoid problems with renaming. Also it makes all the usages obvious in the diff (otherwise it’s easy to miss places where the code has to be changed after changing char[64] to char *).

Having this redundancy and an extra 8 byte is not ideal, but to me that’s a better trade off than adding ~200 bytes to each name and still not having a dynamic size.
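The dual-field idea above can be sketched in a few lines. The struct and function names here are hypothetical stand-ins (only `name_legacy` and the `char *` pointer come from the proposal itself), and the legacy buffer is assumed to be 64 bytes as stated:

```cpp
#include <cassert>
#include <cstring>
#include <string>

struct FakeID {
  char name_legacy[64];  // written to the .blend for 4.5 forward compat
  char *name_ptr;        // used at run-time, dynamically allocated
};

// On file-write, mirror the dynamic name into the fixed legacy buffer,
// truncating and NUL-terminating so 4.5 reads a valid (shortened) name.
void sync_legacy_name_for_write(FakeID &id) {
  std::strncpy(id.name_legacy, id.name_ptr, sizeof(id.name_legacy) - 1);
  id.name_legacy[sizeof(id.name_legacy) - 1] = '\0';
}
```

Run-time code would only ever touch `name_ptr`, and the copy happens once per ID at save time, which keeps the redundancy cost to the extra pointer plus the 64 legacy bytes.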

4 Likes

For non-data-block names that should not be an issue. Also I’d be more careful with increasing the length there. It’s perfectly valid to have an order of magnitude more modifiers/bones/attributes/… than data-blocks. So if we add a significant amount of memory for each of these, it’s a bigger deal imo.

Sure, there are more non-data-block names. Being careful is nice, but I am not fully convinced it will be a problem. Besides, I am not sure the proposal even talks about those cases.

So far we were talking about MAX_ID_NAME. Non-data-block names are managed by MAXBONENAME, MAX_NAME etc. Not sure there is some implicit requirement for them to match?

Applying the dynamic strings to something more than just IDs also increases the complexity of the project.

That said, an alternative solution that should work fine and does not need any changes in 4.5 is to just have two name fields.

I am not sure how this solves the problem of blo_bhead_id_name.
But even ignoring that, do we realistically have the resources to implement dynamic strings for 5.0?
Maybe a good first step would be to ensure the current ID::name is used in a way that is compatible with it becoming dynamic in the future.