Blender Vulkan - Status report

Last week there was some movement in the discussion around Vulkan integration in Blender. This post gives the status of the Vulkan project and the updated reasoning behind it.

Currently there are no active developers working on Vulkan integration in Blender, although many design decisions are driven by this API. If you have ideas or comments, or want to participate in such a project, please get in contact with us by leaving a reply or via #eevee-viewport-module on Blender Chat.

Why do we want to support Vulkan in Blender?

  • OpenGL isn’t developed anymore. Vulkan is replacing it: since the introduction of Vulkan 1.0 no core changes have been made to the OpenGL standard. When Vulkan was announced back in 2015 it was actually named “OpenGL Next”. New technologies we want to benefit from (for example the GPU ray tracing API, but also AR/VR standards) are only available/standardized in Vulkan.
  • The OpenGL specification exists as a collection of documents, some required, others optional. Implementations of the specification differ per vendor: AMD tries to follow the specification to the letter, while NVIDIA is more relaxed. The Vulkan standard includes a detailed conformance test suite that drivers are required to pass. Vulkan still has optional extensions, but vendors are more eager to align on them.
  • OpenGL drivers differ internally: optimizing for NVIDIA is different from optimizing for AMD. For example, NVIDIA drivers use a threaded GPU upload for buffers that we have no control over; AMD drivers do not clear newly created buffers and are slow when clearing them manually. OpenGL drivers also have bugs, and Blender carries workarounds that can cost performance or disable features. Vulkan solves this: as a low-level API it is closer to how the actual hardware works, which reduces the number of decisions a driver developer needs to make.

What has been done so far?

  • Between 2019 and now we have designed/engineered a system that makes it possible to add Vulkan to Blender. The core parts of this system have already been implemented; they are listed below.

  • Blender 2.8 introduced the Draw Manager. The Draw Manager is used to draw the 3D viewport, and nowadays also the Image/UV editor and the compositor backdrop. It is structured around data types similar to Vulkan’s.

  • In 2020 all communication between Blender and the GPU backend was abstracted away behind an API (the GPU module). This allows us to have different GPU backends: one for OpenGL, Vulkan or Metal. It ensures that feature development can be done in a backend-agnostic way. This was tested with an initial Vulkan implementation and is currently being used by the Metal port.

  • Vulkan and OpenGL both use GLSL, but their dialects are not compatible, and Metal has its own shading language. In 2021/2022 a system was introduced to cross-compile GLSL to any of the backends. This was done with Vulkan in mind, although it is not used for Vulkan yet. For the Metal backend we are testing that the mechanism works.

  • BGL has been replaced by the GPU module, and add-on developers are requested to port their add-ons to it. This will make the transition to other GPU backends easier. We expect that after migrating to the GPU module, only the shaders will need to be tweaked to make sure that cross-compilation to the other backends works.
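As a toy illustration of the cross-compilation idea above (hypothetical names and preambles, not Blender’s actual shader pipeline), one way to target several backends from a single GLSL body is to combine it with a per-backend preamble before compilation:

```python
# Toy sketch of backend-specific shader source assembly (hypothetical,
# not Blender's real cross-compiler): a single GLSL body is paired with
# a per-backend preamble before it is handed to the compiler.

PREAMBLES = {
    # Versions/defines chosen for illustration only.
    "opengl": "#version 430\n",
    "vulkan": "#version 450\n#define VULKAN 1\n",
}

def assemble_shader(backend: str, body: str) -> str:
    """Prepend the preamble for the requested backend to the shader body."""
    try:
        preamble = PREAMBLES[backend]
    except KeyError:
        raise ValueError(f"unsupported backend: {backend}")
    return preamble + body

body = "void main() { gl_Position = vec4(0.0); }\n"
# The first line of the assembled source is the backend's version directive.
print(assemble_shader("vulkan", body).splitlines()[0])
```

The real mechanism is considerably more involved (shader create-infos, resource binding translation), but the principle is the same: one shader body, multiple backend-specific front matters.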

What still needs to be done?

  • Implement GHOST_ContextVK and a selector to start Blender with OpenGL or Vulkan.

  • Implement the GPU Vulkan backend. This is most of the work: each GPU data type should get its Vulkan-specific implementation (Backend, Compute, Context, Framebuffer, Index Buffer, Query, Shader, State, Storage Buffer, Texture, Vertex Array, Vertex Buffer, Debug).

Although this doesn’t seem like a lot of work, keep in mind that roughly 10 times more lines of code need to be written for Vulkan than for OpenGL, even just to display a triangle on the screen. And when doing this for Blender, you won’t be able to see any pixels until the final phase of the project.

Open Topics

AFAIK OpenSubdiv doesn’t support Vulkan yet. There might be ways around it (sharing data between an OpenGL and a Vulkan context), but that would be far from ideal and error-prone.

June 2023 update: Blender Vulkan - Status report - #19 by Jeroen-Bakker


Thanks for the nice overview and status update. Very informative and appreciated.


Exciting! After the port, won’t Blender be able to run sideloaded on an Oculus Quest 2?

We don’t know if that would work out of the box. If it supports OpenXR it might, but it’s best to test after the project is finished.

Note that we are building a highway. A car builder can build a car that can drive on that highway, but whether the highway supports a specific electric car is a question for the car builder, not the road builder.


Can you implement some Vulkan functions natively in Blender? For example, I know that since last week Vulkan officially supports mesh shading (meshlets). Could we keep the basic geometry data for edit mode / object mode etc., and compile a version that works only with meshlets to improve render time, for example?

Or develop full triangle-cluster occlusion culling like UE? (Which is basically the same as meshlets.)


It depends on how widely mesh shaders will be supported across platforms. Currently mesh shaders are an optional extension, so not yet. If in the future they are supported by more platforms and we have a use for them, we will. Until then I doubt we will add an API for platform-specific features, for maintenance reasons.

Your second question is about occlusion culling and adaptive rendering. We are keeping our eyes open and discussing upcoming technologies. Note that when adopting a technology you also adopt the parts that don’t work that well. The meshlets in UE are, AFAIK, limited to static, non-transparent meshes and require preprocessing. As Blender is a 3D content editor, we don’t make a distinction between static and dynamic geometry, so we need to find a solution that works well for our users.


Thanks for the report. What is the rough timeline (2024, 2027, 2030… etc) for moving to full Vulkan support?


No timeline because no one is working on it

I think there is another important reason for a vk backend for Blender.
Without Vulkan you can’t even use subgroups (waves in HLSL) in compute shaders, which are already widely used in practice in rendering applications and games (Unreal Engine, Unity, recent AAA games). They use subgroups to efficiently implement advanced GPGPU algorithms (parallel scan, segmented scan, reduction, histogram, radix sort, compaction, etc.). As Blender strives to catch up with industry standards, this is a fundamental and urgent topic.


Yes, there is a lot of performance and functionality work that should be done, but timing-wise this can only happen after the base is ready. I do think new Blender functionality should be supported on all backends, natively or with a fallback. This is a key principle: content that is created should work on all officially supported platforms with the same quality. Performance can differ, of course. Game engines often make other decisions (accepting visual differences), which is far from ideal for Blender. What you’re proposing looks like access patterns to improve performance, which is great, and should be possible to add with a fallback for platforms that don’t support them. Another principle is code readability, which allows more developers to work in these areas.

Our goal for 2022 was to be able to use the shader_builder to validate cross-compilation of GLSL. This was committed to master last month and allows developers to continue adding features with validation for OpenGL, Metal and Vulkan. The Vulkan library/headers and shaderc have been added to the pre-compiled libs; the CMake files still need to be adjusted.

In the first half of 2023 I want to work on adding support for compute shaders, including buffers etc. When that is done, I think I can give a better estimate of what needs to be done for the graphics pipeline.

Over the next few weeks I want to make the design and plans for the compute pipeline more concrete. I want to free up some time to make sure I can focus at least 40% of my time on this project. Community help (feedback, tips, or actual code) is very much appreciated!


I do agree that basic rendering features should be supported across platforms; game engines also have to consider this, especially for mobile games.

For the Eevee engine, which targets (semi-)real-time rendering on PC platforms, the technology should not differ much from that used in modern AAA games. For example, as an amateur 3D art enthusiast, I think that for PBR, Lumen and Nanite together are a very competitive alternative to Eevee, and Unity is also a worthy rival for stylized rendering and prototyping.

Hence it is natural to do things similar to these game engines and shift to a modern GPU-driven rendering framework: virtual shadow mapping, GPU culling, or even Nanite-style visibility buffering and software rasterization.

To achieve the above GPU-driven rendering functionality, one needs GPGPU primitives (parallel scan, reduce, histogram, etc.) as building blocks, which in turn require proper support for compute shaders (subgroup/wave intrinsics, atomics, LDS/TGSM, etc.). For example, efficient GPU culling requires block-level parallel scan and global atomics; GPU BVH construction/update needs parallel radix sort. These are not “access patterns”; they are fundamental operators for composing any parallel computing algorithm.
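To make the subgroup primitives above concrete, here is a CPU sketch of a subgroup-style inclusive prefix scan (Hillis-Steele), the building block behind GPU compaction and culling. This is an illustration only; on real hardware every lane of the subgroup executes each step in lockstep:

```python
# CPU sketch of a subgroup-style inclusive prefix scan (Hillis-Steele),
# mimicking what a subgroup-wide inclusive-add does on the GPU.
# Each loop iteration models one lockstep step across all lanes.

def subgroup_inclusive_scan(lanes):
    """Inclusive prefix sum over one subgroup, doubling the stride each step."""
    values = list(lanes)
    n = len(values)
    stride = 1
    while stride < n:
        # Every lane reads its neighbour `stride` positions back, then adds.
        snapshot = list(values)  # models the barrier between read and write
        for lane in range(stride, n):
            values[lane] = snapshot[lane - stride] + snapshot[lane]
        stride *= 2
    return values

print(subgroup_inclusive_scan([3, 1, 4, 1, 5, 9, 2, 6]))
# -> [3, 4, 8, 9, 14, 23, 25, 31]
```

For a subgroup of width n this takes log2(n) steps instead of n, which is why scan-based compaction and culling are so cheap once subgroup intrinsics are available.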


(Note: Entirely self-serving and possibly slightly entitled comment, sorry)
I’m keen to see Vulkan support, as I have recently upgraded to a laptop with an Intel Iris Xe GPU. It’s great with games that support OpenGL, and it has Vulkan support, but it isn’t up to the oneAPI level of the bigger dedicated Intel cards. I was a little disappointed that Blender chose to drop OpenGL before Vulkan was ready to replace it, and I’m wondering how many others feel this way, or is it just me. (Not really wanting to miss out on new Blender features just because I have to run Cycles on the CPU :slightly_smiling_face:)
Correction: I meant OpenCL, not OpenGL, above, which I guess makes my point about Vulkan slightly moot. (Thanks to those who pointed out my error.)

Blender chose to drop OpenGL? What are you on about? It’s still the default backend on all platforms.


I think you’re misinterpreting the information you received, or didn’t get it from the people who actually know. OpenGL isn’t dropped and won’t be dropped unless there is a replacement for all platforms.

It might be that you’re confusing OpenGL with OpenCL. But that is off-topic for this thread.


Thanks for the clarification - I was indeed in error, and I should have said OpenCL. Apologies for the confusion. @LazyDodo - thanks for the correction. I obviously need to do more homework :slight_smile:

OpenCL was a Cycles backend; what is being discussed here is a Vulkan backend for the UI/EEVEE/Workbench engines. Having support there will not add any Vulkan support to Cycles.


So - in short, I won’t be able to use my Iris Xe to accelerate Cycles renders at all.

Not by the work being talked about here, indeed.


Over the last half year, more time was spent on the Vulkan project than originally anticipated. I will outline the approach we followed, the current state, and known future quirks and bottlenecks.


Past milestones

  1. Being able to cross-compile shaders between OpenGL, Metal and Vulkan. [Goal reached December 2022]

  2. Being able to run compute shader test cases. [Goal reached February 2023]

  3. Memory and buffer management. [Goal reached March 2023]

  4. Being able to start Blender (no viewport). [Goal reached May 2023]

  5. Most of Workbench working. [Goal reached May 2023]

Current milestones

  1. Make sure that other render engines are able to draw something. There might still be many artifacts, but at least you would be able to see something.
  • Workbench-next, Eevee-next, Overlay-next, Grease Pencil, Cycles Viewport, Image engine

Future milestones

After everything can be seen, more time will be spent on adding support for ‘all’ platforms and improving the performance.

Current development is mostly done on discrete GPUs. On a weekly basis I switch between NVIDIA, AMD and Intel GPUs to make sure that we don’t implement something that cannot be done on another brand of GPU. This is mostly done on Linux. Of course, the Blender community sends in patches when something is not working on Windows. There are still many configurations that don’t work, and we need to investigate how to fix them.

Vulkan can deliver excellent performance in a gaming pipeline, but Blender isn’t a game and therefore isn’t (and cannot be) organized the same way as a game. For now the performance is only around 5-10% of what I expect the final version will achieve, mostly because I have been very conservative when sending commands to the GPU. When a command is scheduled, the backend waits until that command has been sent to the GPU, and then waits until the GPU has finished it, before the next command can be scheduled. Vulkan can speed things up when multiple commands are submitted in a single batch, with barriers between commands that write to a resource and commands that read from it.

Until now we haven’t implemented any performance improvements, as most documentation on the internet covers gaming pipelines where developers have more control over the full pipeline. We have to find our own path, and that path is getting clearer with each commit we make. The current idea is to have custom command encoding that sends the commands and barriers at the right moments.
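The command-encoding idea can be sketched in a few lines (hypothetical names, not Blender’s actual code): commands are collected into one batch, and a barrier is only recorded when a command reads a resource that an earlier command in the batch has written:

```python
# Toy model of batched command encoding with automatic barriers
# (hypothetical structure, not the real Vulkan backend). A barrier is
# inserted only on a read-after-write hazard within the batch.

class CommandEncoder:
    def __init__(self):
        self.batch = []     # commands + barriers, submitted in one go
        self.dirty = set()  # resources written since the last barrier

    def record(self, name, reads=(), writes=()):
        """Append a command; emit a barrier first if it reads dirty data."""
        if self.dirty.intersection(reads):
            self.batch.append("BARRIER")
            self.dirty.clear()
        self.batch.append(name)
        self.dirty.update(writes)

    def submit(self):
        """Hand the whole batch over at once and reset the encoder."""
        batch, self.batch, self.dirty = self.batch, [], set()
        return batch  # a real backend would submit this to the GPU queue

enc = CommandEncoder()
enc.record("draw_shadows", writes={"shadow_map"})
enc.record("draw_meshes", reads={"shadow_map"}, writes={"color"})
enc.record("draw_overlay", writes={"overlay"})
print(enc.submit())
# -> ['draw_shadows', 'BARRIER', 'draw_meshes', 'draw_overlay']
```

Compared with submitting and waiting per command, the GPU receives one batch and only stalls at the recorded barriers, which is where most of the expected speedup comes from.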

Potential Risks


The initial plan is to have a usable experimental backend that can be downloaded and used to collect feedback by the end of 2023. Depending on the initial feedback it might even be available in an official release, though we would not yet recommend using it as the default backend.

There are still a lot of tasks that need to be implemented in the upcoming period. However, I have been asked to help with Eevee-next, which might shift the planning by one or two months and adds pressure to the original schedule. It is currently unclear what impact this priority change will have on the Vulkan project.

Supporting data types

During the course of the project there have been quite a few gotchas. One of them is that even when a GPU supports a certain data type (GPU_RGB16) and usage (framebuffer attachment), the platform can still return an error that the combination isn’t supported. In order to continue, we sometimes need to reconfigure those buffers so we don’t have to transform the content of the buffer all the time.

Inside Blender we have more control over the actual buffer data and can update the original data or even phase out problematic data types, but this isn’t always possible: the Blender Python API still allows add-on developers to create those buffers. The Vulkan backend currently detects invalid usages and tries to transform the data to a compatible format when uploading it to the GPU.

It is unclear whether we will need to implement more workarounds during the project to ensure that the most-used data-type/usage combinations keep working from an add-on developer’s point of view.
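The kind of format transformation described above can be sketched as follows (illustrative only, not the actual backend code): when a three-channel format such as GPU_RGB16 isn’t supported as a framebuffer attachment, the upload path can pad the data to a supported four-channel format such as RGBA16:

```python
# Sketch of a format fallback on upload (illustration only): flat RGB
# pixel data is padded with an alpha component so it fits a supported
# four-channel format.

def pad_rgb_to_rgba(pixels, alpha=1.0):
    """Expand flat [r,g,b, r,g,b, ...] data to [r,g,b,a, r,g,b,a, ...]."""
    if len(pixels) % 3 != 0:
        raise ValueError("pixel data is not a multiple of 3 components")
    rgba = []
    for i in range(0, len(pixels), 3):
        rgba.extend(pixels[i:i + 3])  # copy the three color components
        rgba.append(alpha)            # insert the padding alpha
    return rgba

print(pad_rgb_to_rgba([0.2, 0.4, 0.6, 1.0, 0.0, 0.5]))
# -> [0.2, 0.4, 0.6, 1.0, 1.0, 0.0, 0.5, 1.0]
```

The cost is the extra conversion pass and 33% more memory per texture, which is why reconfiguring buffers up front is preferred whenever Blender controls the data.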

Final note

A huge thanks to the community developers for lending a hand, providing patches and sharing knowledge. The implementation has been changed based on what we have heard back from you!

During the Blender Conference I will also share more information about the project and go more into the details of Vulkan.


Last week a big milestone was reached: the Vulkan backend has been enabled as an experimental option. It will be available in alpha builds on Linux and Windows.

This option is highly experimental and is enabled to get some insight into platform support. Don’t expect a fully working Blender yet, and don’t expect usable performance either.

What is known to not work?

  • OCIO textures are not supported on Intel and AMD GPUs. sRGB/Standard is supported
    on those platforms.
  • AMD Polaris-based GPUs on Linux will crash when drawing the 3D cursor, as the driver
    doesn’t support the needed vertex format. Workaround: comment out DRW_draw_cursor in DRW_draw_region_info.
  • The colors in the node editor and sequencer are off, as sRGB viewports aren’t detected correctly.
  • The Image/UV editor isn’t working, as many texture formats haven’t been tested yet. Some
    tweaks are also needed to get correct depth testing.
  • The 3D viewport is known to flicker. Sometimes Workbench doesn’t display anything.
  • 3D Viewport wireframe will crash as it uses a framebuffer with gaps between color attachments,
    which isn’t supported yet. (#113141 - Vulkan: Support for Framebuffer with Missing Attachments - blender - Blender Projects)
  • The rotate-the-view widget is only partially drawn due to incompatible depth clipping.
  • GPU selection isn’t working. This is expected to be solved when Overlay-Next becomes the
    default engine. For now, disable GPU depth picking in the preferences.
  • Cycles/EEVEE are known to not work with Vulkan yet. Cycles requires a Vulkan pixel buffer.
    CUDA ↔ Vulkan interop might require a different approach than OpenGL, as Vulkan doesn’t allow
    importing memory from a CUDA context. EEVEE uses features that aren’t available in the backend yet.
  • Workbench is working, except for Workbench shadows.
  • EEVEE-Next basics are working; shadows and lights are known not to work. Materials/shading
    work on a single object. Changes are expected in EEVEE-Next that will break Vulkan compatibility
    in the near future.
  • Systems with multiple GPUs are not working.
  • Wayland support is in review and should land before this PR (#113007 - Vulkan: Wayland Windowing - blender - Blender Projects)
  • OpenXR hasn’t been modified and is expected to fail.
  • The backend is very strict about misuse of the GPU module. In debug builds it may crash
    on asserts.
  • Older drivers/GPUs might not have all the features that we require. The workarounds
    for the missing features still need to be implemented.

A word about performance

In the project planning we focus on stability and platform support first. The performance of Vulkan is
around 20% of what we want to achieve. The reason is that each command sent to the GPU is
executed one at a time; the implementation even waits until we have feedback that the GPU
is idle again.

Geometry is currently stored in system RAM; the GPU reads and caches the data when accessing
geometry, which slows things down for objects with a lot of geometry. Some performance features
like MDI (Multi-Draw-Indirect) haven’t been implemented yet and fall back to single indirect draws.
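The MDI fallback can be modelled in a few lines (hypothetical structure, not the actual implementation): without Multi-Draw-Indirect, a list of indirect draw commands is replayed as individual draws, one submission per command:

```python
# Minimal model of the Multi-Draw-Indirect fallback (hypothetical
# structure): the same command list is either handed over in one
# submission, or replayed as one submission per draw.

def multi_draw_indirect(commands, submit):
    """Preferred path: hand the whole command list to the GPU at once."""
    submit(commands)

def single_draw_fallback(commands, submit):
    """Fallback: one submission per command; correct, but more overhead."""
    for command in commands:
        submit([command])

submissions = []
cmds = [{"vertex_count": 36}, {"vertex_count": 12}]
single_draw_fallback(cmds, submissions.append)
print(len(submissions))  # 2 submissions instead of 1
```

The output is identical either way; the fallback just pays per-draw submission overhead, which is why MDI matters for scenes with many objects.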

Why enable it as an experimental option?

  • Ensures that new features are being tested with Vulkan.
  • Ensures that building with Vulkan is possible on supported platforms.
  • Gets feedback from developers on whether Vulkan can run on their system, or
    whether there are special cases we are not aware of. The main development
    environment has been Linux/X11, with occasional testing on Windows.
  • Validates add-ons that use the gpu module.
  • Makes it possible to enable GLSL validation on the buildbot (needs more work).
  • Checks whether it compiles on all machines or requires more CMake changes.

How can the backend be enabled?

Currently the Vulkan backend can be enabled per Blender session by starting
with the command line argument --gpu-backend vulkan. In the future, after
the backend is proven to work, we will add a user preference to switch between
OpenGL and Vulkan.

How can you help?

  • Download the latest alpha build or compile it yourself. (The Windows build seems to be unavailable on the buildbot at this time.)
  • Start Blender with blender --factory-startup --gpu-backend vulkan.
  • Does it start? Great! You can click around, but expect missing features and drawing artifacts.
  • Does it crash on startup? Not so great; please report a bug! I am keen to fix startup crashes.
  • Does it show a message that the GPU isn’t capable of running Blender, even though you have a Vulkan 1.2 capable system? Get in contact with me on Blender Chat. We keep track of platforms that should be supported and will find out if more workarounds are needed.