Native C++ Containers in Blender

I have committed C++ containers such as BLI::Vector and BLI::Set to the master branch in the last couple of months (after discussion with @brecht and @sergey). This has led to confusion among other developers for various reasons. To be fair, it was never publicly discussed or communicated what the plan for those data structures is.

We can use this thread to discuss whether we want custom/native C++ containers in Blender. If yes, we also have to decide which ones and where they should be used. I’ll start the discussion by providing a couple of general aspects that need to be considered and my current opinion.

Advantages of a native container

  • We can add all the methods we want to make them comfortable to use for the problems we have to solve.
  • We can implement (optional) features such as small object optimization.
  • We have greater control over performance when necessary.
  • We use the same container implementation on all platforms.

Disadvantages of a native container

  • We have to design and implement them ourselves.
  • We have to ensure the correctness ourselves.
  • Custom containers easily need a couple thousand lines of code that we have to maintain ourselves.
  • We have to document the containers ourselves.
  • Developers used to the standard library cannot apply this knowledge directly to our custom containers. So there is more learning overhead.

Alternatives for usage policies of native containers

  • Use standard library containers by default. Only use a native container when there is a good reason. That reason should be explained in a comment.
  • Use native containers by default. Only use a container from the standard library when there is a good reason. For example, you might have to interact with a third party library that expects standard library containers.
  • Decide on whether standard library or native containers should be used on a per module basis.

Alternatives for naming conventions used by native containers

  • Follow the type and method naming conventions of the standard library whenever possible.
  • Follow the method naming conventions of the standard library, but use different type names (e.g. starting with an upper case).
  • Follow the naming conventions of the standard library only when the behavior of a method matches that of the standard library exactly.
  • Follow naming conventions of some other library, possibly from another language like Python.
  • Follow naming conventions of C libraries that already exist in Blender.
  • Don’t follow any particular naming conventions.

Advantages of wrapping the standard library

  • It’s easier to make it a drop-in replacement for standard library containers.
  • We can still add methods to make them more comfortable to use.
  • Some features such as small object optimization can be added to some degree.
  • Requires less code than implementing the containers entirely ourselves.
  • It’s easier to write containers that are correct.

Disadvantages of wrapping the standard library

  • No control over how data is laid out in memory.
  • Uses different container implementations for different platforms.

What follows is my opinion on these topics. It might change over the course of the discussion.

  • We can add all the methods we want to make them comfortable to use for the problems we have to solve.
  • We can implement (optional) features such as small object optimization.
  • We have greater control over performance when necessary.

Those aspects are quite important to me when I work with C++. Maybe those are not as important to others.

We use the same container implementation on all platforms.

This is not as important as the others, but it might be nice when we want to compare performance between different platforms.

We have to design and implement them ourselves.

I’ve done a good part of that last year already. My design decisions are open for discussion of course. We are in a state where things can still change relatively easily.

We have to ensure the correctness ourselves.

The containers already have fairly good unit test coverage. Furthermore, I’ve been using them for a couple of months now and I’m not aware of any bugs. While there probably are some hidden bugs still, I think we will find them relatively quickly once we decide to use the containers more in Blender.

Custom containers easily need a couple thousand lines of code that we have to maintain ourselves.

I think the maintenance cost of native containers is relatively low compared to most other code in Blender. That is because they do not depend on code that changes often.

We have to document the containers ourselves.

Individual methods often have comments already. High level documentation about the inner workings of the containers and when/how they should be used is lacking in many cases. It will be my responsibility to provide that documentation once we’ve come to conclusions in this thread.

Developers used to the standard library cannot apply this knowledge directly to our custom containers. So there is more learning overhead.

This is probably the biggest hurdle. It is yet another set of APIs that has to be learned by potentially every Blender C++ developer. Depending on our naming convention choice, this might be harder or easier. My opinion is probably quite biased here, because I know the API already. Personally, I think that learning something once and then benefiting from that in the long run is almost always worth the effort.

Alternatives for usage policies of native containers

When we have core native containers such as BLI::Vector, I think they should be used everywhere by default. At least in new code. Corresponding standard library containers might be necessary in some cases, but those use cases should be explained in comments.

Alternatives for naming conventions used by native containers

I don’t think we should follow the standard library naming conventions closely. While it is practical to have a drop-in replacement during the code transition period, I don’t think that alone is important enough. Furthermore, using the same method names as the standard library can be dangerous when a method does not do exactly what the standard describes.

I like the idea of loosely following the API names of Python lists, sets and dicts with names such as append, add and remove. Maybe this is because I know the Python API better. However, the same might be true for many existing Blender developers.

Wrapping standard library containers

Wrapping a simple data structure like std::vector might make sense. Small object storage could be added, but with limitations. I’m not sure if proper copy and move semantics of a std::vector can be implemented using e.g. this Stack Allocator. I think having a small buffer inside the vector by default that works well with moving/copying is useful. We can also make sure that this buffer is not larger than some constant by default using some compile time programming.
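To illustrate the small-buffer idea, here is a minimal, hypothetical sketch (the names and the inline capacity are made up; this is not the actual BLI::Vector code): elements live in an inline buffer until a threshold is exceeded, and only then is heap memory allocated. The copy and move semantics mentioned above are exactly the tricky part and are deliberately left out; moving a vector whose elements are inline means moving the elements themselves, not just swapping a pointer.

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <utility>

// Hypothetical sketch of a vector with a small inline buffer.
// The first N elements are stored directly inside the object;
// heap memory is only allocated once that capacity is exceeded.
template<typename T, std::size_t N = 4>
class SmallVector {
  alignas(T) char inline_buffer_[sizeof(T) * N];
  T *data_ = reinterpret_cast<T *>(inline_buffer_);
  std::size_t size_ = 0;
  std::size_t capacity_ = N;

 public:
  SmallVector() = default;
  // Copy and move are the tricky part and are intentionally omitted here.
  SmallVector(const SmallVector &) = delete;
  SmallVector &operator=(const SmallVector &) = delete;

  ~SmallVector() {
    for (std::size_t i = 0; i < size_; i++) data_[i].~T();
    if (!is_inline()) ::operator delete(data_);
  }

  bool is_inline() const {
    return data_ == reinterpret_cast<const T *>(inline_buffer_);
  }

  void append(const T &value) {
    if (size_ == capacity_) grow(capacity_ * 2);
    new (data_ + size_) T(value);
    size_++;
  }

  std::size_t size() const { return size_; }
  T &operator[](std::size_t i) { return data_[i]; }

 private:
  void grow(std::size_t new_capacity) {
    T *new_data = static_cast<T *>(::operator new(sizeof(T) * new_capacity));
    for (std::size_t i = 0; i < size_; i++) {
      new (new_data + i) T(std::move(data_[i]));
      data_[i].~T();
    }
    if (!is_inline()) ::operator delete(data_);
    data_ = new_data;
    capacity_ = new_capacity;
  }
};
```

The first N appends touch no allocator at all; once the inline capacity is exceeded, the elements are moved to a heap buffer and the vector behaves like a regular growable array.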

Wrapping more complex data structures like std::unordered_set does not really provide a big benefit, because we would have to change the memory layout fundamentally, in ways that are not compatible with the requirements of std::unordered_set.

While it might be useful to wrap some really simple data types to add some additional methods, I don’t think that is a good approach for native containers.

Below are some examples on how the containers that are currently in blenlib can be used. The first two patches also show a nice measurable speedup.

  • D7506: Use BLI::Set instead of std::unordered_set in depsgraph module.
  • D7509: Use BLI::Map instead of GHash for operations_map.
  • D7512: Use BLI::Map for RNANodeQuery.id_data_map_.
  • D7521: Use BLI::Map in RootPChanMap.
  • D7556: Use BLI::Vector for Relations.

@LazyDodo @ideasman42 @julianeisel @mont29 @JeroenBakker @sybren


I’ll recycle my comment from chat yesterday with minor edits.

The issues I have with the containers that arrived this week:

  1. I’m unsure what problem they solve and why, or when I should be using them; the comments in the headers do not help. Are they general purpose? Should we be using them everywhere? Or do they solve a very specific problem and should only be used in very specific performance-critical code? How do I decide?

  2. Given 1, I can’t tell if it’s actually doing what it set out to do, or if it is the best way to go about that.

  3. The argument “custom implementations can give significant performance improvements” (made in chat) is right, no doubt about it; however, sometimes a custom implementation also performs worse. Unless we back up this new code with a scenario where it actually helped, what’s the point of having it? We have a solution, the problem!? That’s worries for later! We’ll justify it… eventually…

    This got backed up with some actual numbers, in this post, thanks for that!

  4. Poor naming: something that can potentially do a large allocation on the stack should not be hiding under a name as common as vector.

I think I mentioned that in the “advantages of a native container” section. These are good reasons for me personally.

While I mentioned my opinion, this is up for debate in this thread.

This is something I can easily fix in the vector class itself. For example, I can either disable the small storage optimization by changing one number, or make sure that it is disabled by default for types larger than some threshold.

I have to admit, it was a long post and I may have missed what you were referring to; the closest I could find quickly was this:

I should use them when there is a “good reason” and it’s somehow now my responsibility to explain it, …wel…thats…not…great… :slight_smile:

Since you wrote these classes, when there is a “good reason” is obviously clear to you; the benefits of a container being stack based or doing small object optimization are obvious to the people who use them on a daily basis. For other developers, or people just getting acquainted with the Blender codebase, this will decidedly not be the case.

Did you notice the comment-to-code ratio on the Stack allocator you linked? Also notice how the comments are in clear English and useful, rather than:

/* Init the flux capacitor. */
void InitFluxCapacitor()


or from our own codebase:

   * Create a vector from an initializer list.
  Vector(std::initializer_list<T> values) : Vector(ArrayRef<T>(values))

Also notice how it starts with a large comment on why one would use it? THAT is exactly the kind of thing I’m after.

It’s not about having native containers or not (calling these native containers is a really, really poor choice of words btw; I would have gone with Blender-specific containers perhaps?). This is about other developers (including new developers to the codebase) being able to determine why a piece of code exists and what problem it is solving.

While certain knowledge can/should be expected for certain parts of Blender (i.e. Cycles doesn’t have to teach you how a path tracer works), in a general purpose library like blenlib this is not the case and we have to step up our documentation efforts.

IMHO, everything-but-the-kitchen-sink helper classes are the absolute worst: “so yes, this class will use the stack unless it’s too big, then it won’t; it’s an arbitrary threshold; also there’s an option to disable this” soo…anyhow… use it wisely…

Just split it off; StackVector has business existing on its own and doesn’t need to be folded into anything else!

If you want a version that automatically switches gears from stack to heap, sweet, just don’t name it vector: there are expectations on how a vector works based on its name.

Our vector is like the standard vector, just has nicer helper methods: Awesome!
Our vector works differently than a standard vector: it’s not just a vector, stop calling it that.

TL;DR: I am not opposed to having our own custom containers when it really makes sense, but would rather keep the std ones as much as possible, in general.

I would vote for using std library as much as possible. Not doing so in C++ these days sounds a bit like not using libc in C to me… Sure, you can always do better with custom code in some cases, but question is: is the extra work worth it?

On one end you have a very widely used set of utils, constantly checked and developed, for “free”. Versus our own private thousands lines of code to write, maintain, etc.

There is also the question of the generality of those “native” containers - they may perform very well in a very specific use case, but actually be worse than std in others. Then what? We keep using both on a case-by-case basis? Because I really have a hard time believing the code in std is not state-of-the-art, tbh…

To me it’s a bit like using tbb versus our own tasks system. It just adds extra work, and while you can tweak it as much as you like, it’s disputable whether you will actually get better performance and security in general/on average.

I’ve used some of the new containers, and find them nice and easy to use. I particularly like VectorSet, which I’m not sure there is a std:: equivalent of.

There’s something to be said for having a small set of data types that follow a naming and usage philosophy that is consistent and easy to remember. If we mix some std:: ones with some of our own, that feels to add some cognitive burden. Especially if the purposes fulfilled by them seem very overlapping - you have to keep thinking about “which do I choose and why”? It’d be nicer if, every time you wanted, say, a hashmap, you always reach for the same thing in Blender code. I’m not much fussed about what the decisions are, but rather, that a decision is made and we document and stick to it.

Jacques’ listed advantages of a Blender-specific container were:

  • We can add all the methods we want to make them comfortable to use for the problems we have to solve.
  • We can implement (optional) features such as small object optimization.
  • We have greater control over performance when necessary.
  • We use the same container implementation on all platforms.

These are all kind of theoretical: “we can…”. What would make this debate more concrete would be examples of “we need to…”. So for instance, if performance is an issue – and it might very well be – what about the standard library containers impedes performance? Brecht had mentioned that vectors need to initialize (clear) their memory; maybe some guarantees that the standard containers have to make are too expensive for us, performance-wise? Or maybe certain usages require too many allocations – what are examples of these that can’t be solved by using things like reserve()? And for methods needed to “make them comfortable to use for the problems we have to solve”: what are examples that are not easily solved by adding extra functions that take the collection as an argument, or by building another type that uses one or more std:: types for its implementation?
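As a concrete reference for the reserve() point, this is the pattern the question alludes to; with one up-front reservation, growth reallocations disappear for the common case (the function name here is made up for illustration):

```cpp
#include <cassert>
#include <vector>

// std::vector::reserve() pre-allocates capacity, so subsequent
// push_back calls do not reallocate until the capacity is exceeded.
std::vector<int> build_indices(int n) {
  std::vector<int> indices;
  indices.reserve(n);  // a single allocation up front
  for (int i = 0; i < n; i++) {
    indices.push_back(i);  // never reallocates while size <= n
  }
  return indices;
}
```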

Maybe it’s not the place for this thread, but I’d also like to see some discussion of questions like: should we aggressively return container types from functions, assuming that move semantics will “do the right thing”? More generally, how much of the advice in do we agree with and want to adopt as coding style rules?


Some concrete examples off the top of my head, where our own data structures can do better than STL.


  • (unordered) std::map and std::set perform a memory allocation for every element. Every memory allocation/free call has a measurable overhead, and many small allocations are worse for performance due to cache misses and fragmentation. This problem can’t be fully solved with a custom allocator. The reason behind this is that STL requires iterators to remain valid for all operations except erase.

  • Small buffer optimizations help to avoid memory allocations entirely. For example, when we need a vector, set or map per face. We know that most faces will have few vertices (e.g. < 128) which can easily be allocated on the stack, and when there are more vertices the overhead of the memory allocation is amortized. With small buffer optimization we can use the same data types and API for both stack and heap use cases, whereas we have different data structures in C for them now.

  • The same small buffer optimization for strings makes code both faster and easier to read. For example in optimized file parsers you will rarely see std::string since the memory allocations come with a significant cost, and instead there is more complicated and error prone C string handling. However with this optimization you can have the convenience of std::string with performance very close to C strings.

We could spend time benchmarking to prove all this. But if you have spent significant time optimizing C/C++ code, you have also spent significant time implementing solutions that avoid heap memory allocations, precisely because they are slow.
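The per-element allocation behavior of node-based containers can be observed directly with a counting allocator (a sketch; the allocator type is hypothetical, but any minimal C++11 allocator works here):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <set>

// A minimal counting allocator: every allocate() call is tallied, which
// makes the one-allocation-per-node behavior of std::set visible.
static int allocation_count = 0;

template<typename T>
struct CountingAllocator {
  using value_type = T;
  CountingAllocator() = default;
  template<typename U> CountingAllocator(const CountingAllocator<U> &) {}

  T *allocate(std::size_t n) {
    allocation_count++;
    return static_cast<T *>(::operator new(n * sizeof(T)));
  }
  void deallocate(T *p, std::size_t) { ::operator delete(p); }
};

template<typename T, typename U>
bool operator==(const CountingAllocator<T> &, const CountingAllocator<U> &) { return true; }
template<typename T, typename U>
bool operator!=(const CountingAllocator<T> &, const CountingAllocator<U> &) { return false; }
```

Inserting 100 unique elements into a std::set instantiated with this allocator typically tallies at least 100 allocations, one per node, which is exactly the overhead the bullet above describes.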


  • std::vector on Windows would crash in debug using &myvector[0] on empty vectors, while it would work fine on Linux. There are other subtle compatibility differences that I’ve run into in the past and spent significant time debugging, where crashes only happen on one platform.

  • The std::vector grow factor was (not sure if it still is) different for Visual Studio and GCC, which gave different performance.


  • std::unordered_set only gained a contains(x) function in C++20, which is probably the operation I use most. set.find(x) != set.end() is not as readable.

  • std::string has few convenience functions for string manipulation, it could be much easier to use. This could be done by subclassing.

  • Debugging crashes or stepping through code in STL types is a pain. The header files are very hard to read and complicated due to their generality. The current BLI type implementations are much more readable.
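For illustration, the pre-C++20 contains() workaround can be wrapped once so call sites stay readable (a hypothetical sketch, not the actual BLI::Set implementation):

```cpp
#include <cassert>
#include <unordered_set>

// A thin wrapper that adds the readable contains() method that
// std::unordered_set only gained in C++20.
template<typename T>
class Set : public std::unordered_set<T> {
 public:
  bool contains(const T &value) const {
    return this->find(value) != this->end();
  }
};
```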


STL is not state of the art, because of backwards compatibility. It is stuck with design decisions made when CPUs were largely single core and memory access was much faster relative to CPU cycles than it is now.

TBB on the other hand was designed with modern CPU architectures in mind.

To be fair, most std::string implementations have small buffer optimization nowadays (at least 16 chars in the cases I checked). The argument for parsers still holds though. In my recent parser experiments I’m not using std::string at all. Instead I do everything with BLI::StringRef which is even cheaper and still convenient.

I agree with your other points.

I’ve used it in the past for optimizing a parser with a “small buffer” size of 1024 bytes. For really small strings std::string is indeed fine.

BLI::StringRef can avoid copies entirely. But you do have to be certain that the memory you are referencing remains valid, which isn’t always easy.
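For readers unfamiliar with the idea: a string reference of this kind is just a pointer and a length, so constructing one never copies or allocates. A minimal hypothetical sketch (not the actual BLI::StringRef code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Sketch of a non-owning string reference: a pointer plus a size.
// Cheap to copy and pass by value, but the referenced memory must
// stay valid for as long as the reference is used.
class StringRefSketch {
  const char *data_;
  std::size_t size_;

 public:
  StringRefSketch(const char *str) : data_(str), size_(std::strlen(str)) {}
  StringRefSketch(const char *str, std::size_t size) : data_(str), size_(size) {}

  std::size_t size() const { return size_; }

  // Example of a convenience method that is easy to add to such a type.
  bool startswith(StringRefSketch prefix) const {
    if (prefix.size_ > size_) return false;
    return std::memcmp(data_, prefix.data_, prefix.size_) == 0;
  }
};
```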

While having all the reasons we would want something listed here on the forum is awesome, some of the points are rather illuminating (and it’s hard to disagree with some of them). However, a new developer stumbling into BLI_vector.hh is greeted with the following (I’m fully aware I’m being a little facetious here):

/** \file
 * \ingroup bli
 * This vector wraps a dynamically sized array of a specific type. It supports small object
 * optimization. That means, when the vector only contains a few elements, no memory allocation is
 * performed. Instead, those elements are stored directly in the vector.

I’ll pick it apart bit by bit

This vector wraps a dynamically sized array of a specific type

Unlike other vectors?

It supports small object optimization, That means,

Awesome! we’re explaining stuff!

when the vector only contains a few elements,

How many is a few?

no memory allocation is performed.

Dubious claim, things need to be stored somewhere, something is allocating memory for them.

Instead, those elements are stored directly in the vector.

Ah yes, “in the vector”; unsure how, to a novice programmer, this is different from “storing things inside an std::vector”.

So, things I learned:

  1. It’s a vector. I was confused by the file being called BLI_vector.hh, but glad we cleared up this looming confusion
  2. It’s able to store ‘a few’ things “without allocating memory”
  3. There’s a magic place ‘inside the vector’ where things are stored, which I can only assume is different from the things inside of a regular std::vector

Things I have not learned:

  • What problem is this header solving?
  • Should I use it? Should I use it all the time?
  • When should I NOT be using it?
  • Why is doing memory allocations a bad thing?
  • How does the magic place work?
  • Should I be making a vector with a ‘few’ 500k objects? I can with any other vector, so that ought to be no issue here, right?

Now let’s compare that to boost::static_vector, which I can only assume is kind of the same thing.

//!@brief A variable-size array container with fixed capacity.
//!static_vector is a sequence container like boost::container::vector with contiguous storage that can
//!change in size, along with the static allocation, low overhead, and fixed capacity of boost::array.
//!A static_vector is a sequence that supports random access to elements, constant time insertion and
//!removal of elements at the end, and linear time insertion and removal of elements at the beginning or
//!in the middle. The number of elements in a static_vector may vary dynamically up to a fixed capacity
//!because elements are stored within the object itself similarly to an array. However, objects are
//!initialized as they are inserted into static_vector unlike C arrays or std::array which must construct
//!all elements on instantiation. The behavior of static_vector enables the use of statically allocated
//!elements in cases with complex object lifetime requirements that would otherwise not be trivially

I hope I don’t have to point out the differences with our documentation at this point.

I honestly don’t care if we make a container that stores information in smoke signals and reads it back using a smoke detector. But we HAVE to do a better job at documenting:

  • Once we decide to make something ourselves rather than using what the STL has to offer, what led to that decision. It’s not obvious to the people not knee deep in the problem you are currently solving, and it won’t be obvious to any new developers getting started with our codebase.

  • How the thing works. The whole “no memory is allocated” claim is rubbish: if it stores something, memory is allocated for it, it’s as easy as that. If it’s done statically at compile time, awesome, just say that; obscuring what is really happening with ‘magic’ helps no one.

and for the love of all that is good, if it behaves differently than something every general purpose C++ developer is familiar with, don’t give it the same name.

For this specific header

  • We decided that the Small Object Optimization idiom is a good solution for a problem we were having
  • Didn’t document why this solution was chosen
  • What the problem was
  • Assumed everyone is familiar with this idiom.
  • Implemented it; it behaves differently than a vector, but we called it a vector anyhow
  • Comments in the header are ‘not great’ bordering on straight up lies.
  • Justified it with “look it works! we used it here and here! it’s measurably better”

I’m not throwing a stink to keep us from having shiny new toys; some of the reasons seem rather valid, and I’m not even questioning the results achieved with these new classes. But we have to do better at making this knowledge available across developers, and not just in the minds of a few enlightened devs.


I’ll echo most of this feedback. Let’s take Abseil as an example (a Google library that has custom containers for many of the reasons listed above): it has docs that explain their philosophy[1] and when/why[2] which structures should be used[3].

It’s pretty well known in circles outside Blender how much the STL containers are complained about. I think everyone here is saying these things are fine, but wants more guidance on the philosophy and the when/why to use these (in all cases? in some?).

For philosophy, consider BLI::StringRef and std::string_view: What is the philosophy here? Should the interface match so that std::string_view can one day be used in the future? Was string_view consulted? Is string_view never intended to be used? If so, why? Whatever the case is, just write it down so new folks understand the why.


Use absl::flat_hash_map most of the time.

Prefer absl::flat_hash_map or absl::flat_hash_set in most new code (see above).

About the original post: I think it’s a bit hard to read and understand, as some lists (like the pros and cons lists) are simple enumerations, whereas other lists (like “Alternatives for usage policies of native containers”) read more like lists of mutually exclusive alternatives. This means that every list has to be read to determine its semantics before its contents can be understood.

This is also what I see in the new BLI:: container implementations. Things are similar to what I’ve seen before (in std::) but they are not the same.

I fully agree with @LazyDodo that documentation is paramount. In my opinion we should have a more scientific approach to documentation, where terminology is well defined before actually trying to convey ideas. As a counter-example, this is the first sentence of the documentation for BLI::StringRef:

A StringRef is a pointer to a string somewhere in memory.

Here the word “string” is used. This being C++ code, it is already undefined what this word means, as it could equally refer to either std::string or char *. It’s just a small thing, and of course it can be looked up by reading the implementation, but that’s not my point. In my opinion, documentation should be understandable when you read it sequentially. At a minimum, the introductory paragraph should be understandable without having to read anything in the rest of the file.

I do think it’s important to cater for those developers that are familiar with the std:: containers. The current implementations of the BLI:: containers use the term “remove” rather than “erase”, and searching BLI_set.hh for the word “erase” yields no results. Someone familiar with std::set would thus have to read through all the function names to find the correct one. I would at least describe the remove() function as “Erase the value from the set” so that a search for the std:: function name produces a meaningful result. I think small changes like this will really help.

Although I like the Python naming more than the C++ naming (“remove” makes more sense to me than “erase”), I don’t feel we should silently borrow names from a different language and expect everybody to just know what names to look for.

As a final note about documentation, because @LazyDodo already made excellent points and @jacqueslucke already said he’d be working on it: what I feel is missing now is some description of the complexity of the functions. For example, for BLI::VectorSet, are inserts still O(log n) like in std::set? Or are they O(1) like in hash sets? Or O(n) because the entire vector needs to be scanned? Without such information, it is hard to determine which implementation to use.

I agree with that.

Yes. Vectors in geometry are quite different. Given that Blender uses a lot of geometry, I feel that the word “vector” is already overloaded, and that any BLI_vector.whatever file should start by explaining that it is a container (like it is doing now).



I agree with all the points made about the documentation. I’ll take your feedback into account and fix that.

StringRef vs. string_view

Those types serve the same purpose indeed. One additional requirement I had was to be able to reference null-terminated strings as well. This is necessary because the C++ code still has to interact with C code a lot in Blender. For that purpose there is BLI::StringRefNull (and yes, I’ll make that clearer in comments). Furthermore, the ability to add convenience functions to a type that references a char array is quite useful.

We could use std::string_view instead of BLI::StringRef, when we switch to C++17. I don’t see a big need for that though. This type is quite simple and interaction with libraries using string_view should be easy, because we can add implicit conversions.
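The implicit conversions mentioned here could look roughly like this (a hypothetical sketch assuming C++17 for std::string_view; the type name is made up):

```cpp
#include <cassert>
#include <cstddef>
#include <string_view>

// Sketch: a StringRef-like type that interoperates with
// std::string_view through implicit conversions in both directions.
class StringRefCompat {
  const char *data_;
  std::size_t size_;

 public:
  StringRefCompat(std::string_view view) : data_(view.data()), size_(view.size()) {}
  operator std::string_view() const { return std::string_view(data_, size_); }
  std::size_t size() const { return size_; }
};
```

With conversions like these, a function taking std::string_view accepts the custom type transparently, and vice versa.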


I’m absolutely fine with not calling it “Vector”. I don’t like the name much either. Suggestions?

If we keep the name, the distinction to geometrical vectors has to be made clear in comments indeed.


For example, for BLI::VectorSet, are inserts still O(log n) like in std::set? Or are they O(1) like in hash sets? Or O(n) because the entire vector needs to be scanned?

The running time of inserts in BLI::VectorSet is O(1) in expectation. Yes, I’ll add that information to the comment.
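For readers wondering how a vector-backed set can reach expected O(1) inserts: the usual trick is to keep the values in insertion order in a vector and pair it with a hash map from value to index. A hypothetical sketch of that idea (not the actual BLI::VectorSet implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Sketch of a vector-backed set: values keep insertion order in a
// vector, while a hash map provides expected O(1) lookup and insert.
template<typename T>
class VectorSetSketch {
  std::vector<T> values_;
  std::unordered_map<T, std::size_t> index_of_;

 public:
  // Returns true if the value was newly added.
  bool add(const T &value) {
    if (index_of_.count(value) != 0) return false;
    index_of_.emplace(value, values_.size());
    values_.push_back(value);
    return true;
  }

  bool contains(const T &value) const { return index_of_.count(value) != 0; }
  std::size_t size() const { return values_.size(); }
  const T &operator[](std::size_t i) const { return values_[i]; }
};
```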


Ah, I’m late to the party.

I have mixed feelings about this - on the one hand, STL containers do have their problems that were already outlined in this thread, on the other hand, reinventing the wheel always comes with the risk of reinventing bugs. There are always corner cases waiting to surprise you.

My humble idea would be to make the custom implementations’ interface as close to their STL equivalents as possible, so that it’s easy to swap them out if necessary for debugging or benchmark purposes (ideal case: a single #define to switch between container implementations). It would also make it easy for newcomers to our code base, since they could rely on their existing STL knowledge rather than having to absorb a new nomenclature. (For example, with custom dynamic array implementations, I always have to double-check whether any size/resize operates on the reserved memory size or the number of items.)
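The single-switch idea could be sketched like this (the macro name and the BLI header path are hypothetical); note that it only works as a drop-in if both types expose the same interface:

```cpp
#include <cassert>
#include <vector>

// Hypothetical compile-time switch between container implementations.
// Defining USE_BLI_CONTAINERS would select the custom containers; this
// is only a drop-in replacement if both interfaces actually match.
#ifdef USE_BLI_CONTAINERS
#  include "BLI_vector.hh" /* hypothetical header name */
template<typename T> using Vector = BLI::Vector<T>;
#else
template<typename T> using Vector = std::vector<T>;
#endif
```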

I think the name ‘vector’ is fine, since any C++ programmer would instantly recognize what it is and does.

I submitted updated versions of the data structures for review. Please have a look at

Thanks for the feedback so far. I tried to incorporate a lot of it.