Usage of `dynamic_cast` in Blender code

The topic of this thread is to decide whether using dynamic_cast (in the context of a polymorphic C++ type hierarchy) should be discouraged altogether in Blender code, or to define the situations where using it would be acceptable or even encouraged.

This topic arose from discussions in PR 124405 and some informal follow-up chats with some Blender devs. Given that the feedback was very mixed, it felt worth having a discussion about it here.

When looking online (e.g. on Stack Overflow), views on the usage of dynamic_cast also seem quite mixed. Three main reasons are often given for why it is usually discouraged:

  • It seems to have a reputation of only being needed in ‘badly designed’ code.
  • It relies on RTTI.
  • It is slower than static_cast (and gets slower the more the type hierarchy gets complex).

NOTE: The Blender codebase already has a fair amount of usages of dynamic_cast, including in recent areas like implicit sharing or geometry nodes…

Alternative Approaches

The two main alternatives to using dynamic_cast, while preserving type safety, are:

  • Virtual casting methods.
  • Asserting that the result of static_cast and dynamic_cast are the same.

I’ll share my personal view and understanding of this topic here. In summary, I do not see any good reason to prevent using dynamic_cast in most of the code. It’s a cheap, easy, safe way to add some type-safety checks. The only case where it can become a problem is when millions of casts need to be performed per second.

The terrible performance of static_cast in a debug build context (when the Undefined Behavior Sanitizer is enabled) would also be a reason for me to prefer dynamic_cast in most cases (although admittedly, the same reasoning applies here as to the usage of dynamic_cast in release builds: the overhead should not matter anyway in ‘common’ cases).

Bad Design

I find this point fairly moot. It usually applies to designs where an abstract base type is implemented by one or more sub-types, in some form of ‘interface’ paradigm. I think that the ‘bad design’ here is not the need to use dynamic_cast, but the need to down-cast at all.


Code using dynamic_cast also typically has its own handling of type info to decide to which sub-type to cast a base pointer. As such, I see dynamic_cast as a simple, fairly efficient and safe way to double-check and validate the correctness of the ‘type management’ performed by the code.

Even if the cast pointer is not validated, a failed dynamic_cast leads to an immediate crash (a nullptr dereference) that is fairly clear and easy to understand. On the other hand, static_cast will give ‘something’ that may or may not crash, immediately or ten function calls later, etc.
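To illustrate with a minimal hypothetical hierarchy (the type names here are made up for the example): when the assumed type is wrong, dynamic_cast makes the failure observable, while static_cast silently hands back a bogus pointer.

```cpp
struct Base {
  virtual ~Base() = default;
};
struct Circle : Base {};
struct Square : Base {
  int side = 4;
};

/* A failed dynamic_cast yields nullptr, so a wrong type assumption is
 * immediately detectable (and dereferencing it crashes right away). */
inline Square *try_as_square(Base *base)
{
  return dynamic_cast<Square *>(base);
}
```

By contrast, `static_cast<Square *>(base)` on a `Circle` would return a non-null pointer whose use is undefined behavior.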


Performance

Dynamic (down-)casting does require some additional operations on the CPU. This cost depends on the complexity and depth of the type hierarchy. On modern hardware and compilers, a dynamic_cast typically takes a few nanoseconds to a few tens of nanoseconds.

While most benchmarks available online are fairly old (10 years or more), this one is more recent (alas, it does not seem to specify the compiler used).

I also ran a small experiment with the (relatively simple) new uiItem type hierarchy from PR 124405 on my machine (Ryzen 9 5950X, Debian testing, clang 16).

Results with a release build are in the 5-nanosecond area per dynamic_cast on average (and virtually 0 nanoseconds for static_cast, as expected).
A debug build (with the undefined behavior sanitizer) gives… interesting results: dynamic_cast is about 20 nanoseconds on average, and static_cast is 3600 nanoseconds! Along with a few `runtime error:` reports about invalid down-casting.
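For reference, this kind of micro-benchmark can be sketched as below (this is not the actual test code from the PR; the uiItem hierarchy is replaced by hypothetical `Base`/`Derived` types, and an optimizing compiler may partially fold the casts, so treat absolute numbers with care):

```cpp
#include <chrono>

struct Base {
  virtual ~Base() = default;
};
struct Derived : Base {};

/* Count successful down-casts so the loop has an observable result
 * and is less likely to be optimized away entirely. */
inline int count_dynamic_casts(Base *base, const int n)
{
  int hits = 0;
  for (int i = 0; i < n; i++) {
    if (dynamic_cast<Derived *>(base) != nullptr) {
      hits++;
    }
  }
  return hits;
}

/* Average cost of one successful dynamic_cast, in nanoseconds. */
inline double average_cast_ns(const int n)
{
  Derived item;
  const auto start = std::chrono::steady_clock::now();
  const int hits = count_dynamic_casts(&item, n);
  const auto end = std::chrono::steady_clock::now();
  (void)hits;
  return std::chrono::duration<double, std::nano>(end - start).count() / n;
}
```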

As a conclusion regarding performance: while it should be avoided in critical areas of the code (where there would be millions of casts per second), dynamic_cast typically does not have a significant impact on performance. In a debugging context, when using an undefined behavior sanitizer, it can actually be several orders of magnitude more efficient than a static_cast.

Alternative Approaches

While using static_cast and asserting that its result matches dynamic_cast is likely a good thing to do in performance-critical areas, I am not convinced the added verbosity (and lesser bug-discoverability) is worth it elsewhere. Not to mention the consequences of a UBSAN build on static_cast.

The virtual methods approach offers the same level of type-safety as dynamic_cast, but with better performance (about twice as fast in simple cases, and performance remains relatively constant as the type hierarchy grows in complexity). However, it adds a lot of clutter to the code, and requires far more maintenance from the dev team, especially as the type hierarchy gets more complex.
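For illustration, the virtual-casting-methods pattern looks roughly like this (hypothetical names; note that the base class needs one method, or a const/non-const pair, per down-castable sub-type, which is where the maintenance cost comes from):

```cpp
struct Derived; /* forward declaration */

struct Base {
  virtual ~Base() = default;
  /* One pair of casting methods per sub-type that needs down-casting. */
  virtual Derived *as_derived()
  {
    return nullptr;
  }
  virtual const Derived *as_derived() const
  {
    return nullptr;
  }
};

struct Derived : Base {
  Derived *as_derived() override
  {
    return this;
  }
  const Derived *as_derived() const override
  {
    return this;
  }
};
```

Calling `base->as_derived()` then behaves like a `dynamic_cast<Derived *>(base)`: non-null only when the dynamic type matches.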

I am even tempted to consider that, if using dynamic_cast becomes a problem and implementing virtual casting methods becomes a solution, then it is a good sign that the current polymorphism-based design is a wrong choice.


The scent of “smelly code” is not intrinsic to dynamic_cast itself; it appears any time you need to go down the hierarchy. So to me, neither dynamic_cast nor static_cast is the winner when it comes to down-casting.

I think there is a missing alternative solution: just a virtual method that takes care of some “business logic” (which is a bit different from virtual casting methods).

Having a proper hierarchy with proper abstraction and API will always be the winner for me, as you can then much more clearly see, early on, that some code needs adjustment. Trying to hunt down all the places in the code that need to be updated once the hierarchy has changed is neither a fun nor a robust process.

I think this gives quite a good summary of the matter:

The C++ core guidelines are pretty clear here: C.146: Use dynamic_cast where class hierarchy navigation is unavoidable

Here are my 2 cents.


Forbidding dynamic_cast would be a strange choice to me. It’s a feature that increases type safety, avoids errors, and makes it easier to add new derived types. Flat-out forbidding it for performance reasons is premature micro-optimization for practically no benefit. (Although you can pat yourself on the back for saving a few hundred nanoseconds, maybe even microseconds, in otherwise badly optimized code that isn’t even a bottleneck </snarky>.) Of course there are cases where it can matter (measure first).

Code Design

Sure, in an ideal world you don’t need to use dynamic_cast… or any cast at all. In practice, you do. There is a lot of Blender code that defeats all design dogmatism.

For example, there’s a lot of C-style inheritance in Blender, with manual type tagging. Converting these types to C++-style polymorphic types is an easy change with a number of benefits. Converting (spaghetti) code to also use virtual functions to replace all type checks is often much more difficult; I’ve spent plenty of time on this in the past. The former should be possible without requiring the latter, otherwise certain improvements will just not happen.
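As a hypothetical sketch of what “C-style inheritance with manual type tagging” looks like (the `Item`/`Button` names are made up for the example, not actual Blender DNA):

```cpp
/* C-style 'inheritance': the base struct is the first member,
 * and an explicit enum tag records the concrete type. */
enum ItemType { ITEM_BUTTON, ITEM_LABEL };

struct Item {
  ItemType type;
};

struct Button {
  Item item; /* must be the first member */
  int is_pressed;
};

/* The manual tag check that dynamic_cast renders unnecessary once
 * the hierarchy becomes a truly polymorphic C++ one. */
inline Button *item_as_button(Item *item)
{
  return (item->type == ITEM_BUTTON) ? reinterpret_cast<Button *>(item) : nullptr;
}
```

The reinterpret_cast is valid here because `Item` is the first member of the standard-layout `Button`, but nothing stops a caller from tagging the struct wrongly; this is exactly the hand-rolled RTTI the Google guideline warns about.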

BTW, it’s been pointed out that Google’s guidelines are to not use RTTI, i.e. no dynamic_cast. However, the guideline ends by saying this:

Do not hand-implement an RTTI-like workaround. The arguments against RTTI apply just as much to workarounds like class hierarchies with type tags. Moreover, workarounds disguise your true intent.

This is essentially an argument against much of Blender’s existing code design (“class hierarchies with type tags”). Following this guideline would require major refactors. Forbidding dynamic_cast in favor of other casts only hides the issue (and doesn’t follow the guideline either).


For new code/designs, dynamic_cast shouldn’t be needed; for old code, it’s preferred for polymorphic types. Completely forbidding it doesn’t make sense.

My preference would be a guideline along the lines of:
“Avoid designs where dynamic_cast is needed. Otherwise prefer it over manual type tagging.”

This follows the C++ core guidelines and moves towards the Google guidelines too.


I don’t have a strong opinion on this.
I find it somewhat unintuitive to do a dynamic_cast without checking the result, though.

If the goal is to add an extra “just in case” check, maybe we could have something like a checked_cast that uses dynamic_cast in debug builds and static_cast in release builds.
So dynamic_cast can be left only for the cases where the type is truly unknown.
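A minimal sketch of such a checked_cast (hypothetical; this is not existing Blender API). In debug builds it uses dynamic_cast and asserts on failure; in release builds (NDEBUG defined) it compiles down to a plain static_cast:

```cpp
#include <cassert>

/* Hypothetical checked_cast: dynamic_cast in debug builds (catches wrong
 * type assumptions), plain static_cast in release builds. */
template<typename T, typename U> inline T checked_cast(U *ptr)
{
#ifndef NDEBUG
  T result = dynamic_cast<T>(ptr);
  /* A null result from a non-null input means the assumed type was wrong. */
  assert(ptr == nullptr || result != nullptr);
  return result;
#else
  return static_cast<T>(ptr);
#endif
}

/* Example hierarchy for the usage below. */
struct Base {
  virtual ~Base() = default;
};
struct Derived : Base {};
```

Usage would look like `Derived *derived = checked_cast<Derived *>(base_ptr);`.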


That sounds like what the “debug build (with undefined behavior sanitizer)” mentioned above does: it basically performs the dynamic_cast and also asserts if it fails.

Yea, do not use dynamic_cast in code that cannot handle a nullptr result; all you have done is replace one crash with another. Putting it in an if, or passing it as an argument to a function that does something useful and sensible with nullptr, is required.
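In other words, something like this (C++17 if-with-initializer; the type names are hypothetical):

```cpp
struct Base {
  virtual ~Base() = default;
};
struct Derived : Base {
  int value = 0;
};

/* The cast result is scoped to the if: a wrong dynamic type simply
 * skips the branch instead of crashing or invoking UB. */
inline int value_or_default(Base *base)
{
  if (Derived *derived = dynamic_cast<Derived *>(base)) {
    return derived->value;
  }
  return -1;
}
```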


You have not just replaced one crash with another by using dynamic_cast. You have replaced undefined behavior, very likely a vulnerability, with a well-defined, instant, and unambiguous crash. This is safer, easier to detect, and easier to fix.


I like that proposal for the guideline, but would rephrase it that way:

Avoid designs where down-casting is needed. Otherwise, prefer using dynamic_cast over static_cast, unless there is a proven critical performance impact when using the former.


Yes indeed. Say an ID is statically cast to the wrong derived type. From there, not much is missing to end up with a faulty ID written to files. It may or may not crash eventually; dynamic_cast would have crashed right away.

There must be an assert in there. It serves the purpose of showing a programmer that the dynamic cast should never fail. If they think it is possible for it to fail, they will do pretty awful things to the code to work around that possibility.

Yea, if you are paranoid you can follow the assert with a dynamic cast instead of a static one. I have generally seen crashes from bad casts. Your example of bad IDs in files has to be handled anyway; it is best to assume anything in a file may be garbage. Any IDs have to be verified to really point to an object. After that, it might be a good idea to use a dynamic cast to make sure it is the right object. I have generally not seen any improvement from putting a dynamic cast after an assert. There is also the need to make the arguments to both line up, but a macro that does both the assert and the dynamic cast can fix this.

That’s where dynamic_cast<T&> should be used. It will throw a std::bad_cast exception and protect from undefined behavior that way, which is safer than an assert that only triggers in debug builds.
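For example (hypothetical types): unlike a debug-only assert, the std::bad_cast exception fires in release builds too.

```cpp
#include <typeinfo> /* std::bad_cast */

struct Base {
  virtual ~Base() = default;
};
struct Circle : Base {};
struct Square : Base {};

/* Returns true if the reference down-cast failed, i.e. std::bad_cast
 * was thrown because `base` does not actually refer to a Square. */
inline bool reference_cast_fails(Base &base)
{
  try {
    Square &square = dynamic_cast<Square &>(base);
    (void)square;
    return false;
  }
  catch (const std::bad_cast &) {
    return true;
  }
}
```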

Probably our guidelines should also include these points from the C++ core guidelines:

Thanks I did not know that! I never used dynamic_cast except with pointers.

What do you think about the following guideline (which I think summarizes what has been said)? I found it easiest to represent it as a decision tree instead of as a paragraph.

Instead of having the decision for performance-sensitive code, we could also have a new method that does one or the other depending on whether one is in a debug build.

Mermaid Code for graph

```mermaid
graph TD
  can_avoid_downcast["Is a design without down-casting appropriate?\nE.g. using virtual methods."]
  use_no_cast["Don't use explicit casting."]
  is_type_check_necessary["Is a type check necessary?"]
  use_dynamic_cast_ptr["Use dynamic_cast with a pointer type.\nAlways check the returned pointer."]
  is_performance_sensitive["Is the code performance sensitive?"]
  use_static_cast["Use static_cast for best performance.\nHard-to-find bug if the assumption is wrong."]
  use_dynamic_cast_ref["Use dynamic_cast with a reference type.\nThrows an exception if the type is wrong."]
  can_avoid_downcast --"yes"--> use_no_cast
  can_avoid_downcast --"no"--> is_type_check_necessary
  is_type_check_necessary --"yes"--> use_dynamic_cast_ptr
  is_type_check_necessary --"no"--> is_performance_sensitive
  is_performance_sensitive --"yes"--> use_static_cast
  is_performance_sensitive --"no"--> use_dynamic_cast_ref
```

This is definitely the most visual and, in my opinion, the most practical suggestion for the guideline.

Agreed as well, looks great.