Last week there was renewed discussion about Vulkan integration in Blender. This post gives the current status of the Vulkan project and updated reasoning.
Currently there are no active developers working on Vulkan integration in Blender, although many design decisions are driven by this API. If you have ideas or comments, or want to participate in such a project, please get in contact with us by leaving a reply or via #eevee-viewport-module on Blender Chat.
Why do we want to support Vulkan in Blender?
OpenGL is no longer being developed; Vulkan is replacing it, and since the introduction of Vulkan 1.0 no core changes have been made to the OpenGL standard. When Vulkan was announced back in 2015 it was actually called "OpenGL Next". New technologies we want to benefit from (for example the GPU ray tracing API, but also AR/VR standards) are only available/standardized for Vulkan.
The OpenGL specification exists as a collection of documents, some required, others optional. Implementations of the specification differ per vendor: AMD tries to follow the specification to the letter, NVIDIA is more relaxed. The Vulkan standard includes a detailed conformance test suite (software) that drivers are required to pass. Vulkan still has optional parts of the specification, but vendors are more eager to align on them.
OpenGL drivers work differently internally, so optimizing for NVIDIA is different from optimizing for AMD. For example, NVIDIA drivers use a threaded GPU upload for buffers that we have no control over, while AMD drivers do not clear newly created buffers and are slow when clearing them manually. OpenGL drivers also have bugs, and Blender carries workarounds that can cost performance or disable features. Vulkan solves this: it is a low-level API that maps more closely to how the actual hardware works, which reduces the number of decisions a driver developer needs to make.
What has been done so far?
Between 2019 and now we have designed and engineered a system that makes it possible to add Vulkan to Blender. The core parts of this system have already been implemented and are listed below.
Blender 2.8 introduced the draw manager, which is used to draw the 3D viewport and nowadays also the Image/UV editor and the compositor backdrop. The draw manager is structured around data types similar to Vulkan's.
In 2020 all communication between Blender and the GPU was abstracted away behind an API (the GPU module). This allows us to have different GPU backends: one for OpenGL, Vulkan or Metal. It ensures that feature development can be done in a backend-agnostic way. This was tested with an initial Vulkan implementation and is currently being used by the Metal port.
Vulkan and OpenGL both use GLSL, but their dialects are not compatible, and Metal has its own shading language. In 2021/2022 a system was introduced to cross-compile GLSL to any of the backends. This was designed with Vulkan in mind, although it is not used for Vulkan yet; for the Metal back-end we are testing that the mechanism works.
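To illustrate why the dialects diverge, here is a conceptual sketch (not Blender's actual cross-compiler, and the enum/function names are made up for this example) of how a single shared GLSL body can be wrapped with a backend-specific prologue. Vulkan GLSL requires explicit binding decorations and a newer `#version`, which the prologue supplies via a macro:

```cpp
#include <string>

// Hypothetical backend enum; Blender's real cross-compiler lives in the
// GPU module and is considerably more involved than this sketch.
enum class Backend { OpenGL, Vulkan };

// Prepend a backend-specific prologue so one shared GLSL body can target
// both APIs.
std::string assemble_shader(Backend backend, const std::string &body) {
  std::string prologue;
  switch (backend) {
    case Backend::OpenGL:
      // Classic GLSL: bindings are assigned by the driver at link time,
      // so the BINDING macro expands to nothing.
      prologue = "#version 330\n#define BINDING(x)\n";
      break;
    case Backend::Vulkan:
      // Vulkan GLSL: bindings must be explicit in the source.
      prologue = "#version 450\n#define BINDING(x) layout(binding = x)\n";
      break;
  }
  return prologue + body;
}
```

A shared shader body would then declare resources as `BINDING(0) uniform sampler2D tex;` and each backend expands the macro to whatever its dialect requires.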
BGL has been replaced by the GPU module, and add-on developers are requested to port their add-ons to it. This will make the transition to other GPU backends easier. We expect that after migrating to the GPU module, only the shaders need to be tweaked to make sure cross-compilation to other backends works.
What still needs to be done?
Implement GHOST_ContextVK and a selector to start Blender with OpenGL or Vulkan.
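As a minimal sketch of what the selector could look like (the flag name and the GHOST plumbing are hypothetical, nothing here is decided yet), a command-line option could pick the backend at startup, with OpenGL staying the default until the Vulkan backend is complete:

```cpp
#include <cstring>

// Hypothetical backend selector sketch; flag name is an assumption.
enum class GPUBackendType { OpenGL, Vulkan };

// Scan argv for a hypothetical "--gpu-backend vulkan" option and fall
// back to OpenGL when it is absent.
GPUBackendType select_backend(int argc, const char **argv) {
  for (int i = 0; i + 1 < argc; i++) {
    if (std::strcmp(argv[i], "--gpu-backend") == 0 &&
        std::strcmp(argv[i + 1], "vulkan") == 0) {
      return GPUBackendType::Vulkan;
    }
  }
  return GPUBackendType::OpenGL;
}
```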
Implement the GPU Vulkan backend. This is most of the work: each GPU data type needs its Vulkan-specific implementation (Backend, Compute, Context, Framebuffer, Index Buffer, Query, Shader, State, Storage Buffer, Texture, Vertex Array, Vertex Buffer, Debug).
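Structurally, each abstract data type in the GPU module gets a VK-prefixed implementation behind a backend factory. The sketch below shows the shape of that pattern with made-up class names, not Blender's exact class layout:

```cpp
#include <memory>
#include <string>

// Illustrative sketch of the GPU module's abstraction pattern: an
// abstract type per GPU resource, implemented once per backend.
class Texture {
 public:
  virtual ~Texture() = default;
  virtual std::string api_name() const = 0;
};

class VKTexture : public Texture {
 public:
  // A real implementation would wrap a VkImage, VkImageView and its
  // device memory; this stub only identifies the backend.
  std::string api_name() const override { return "Vulkan"; }
};

class GPUBackend {
 public:
  virtual ~GPUBackend() = default;
  // One factory per data type; Framebuffer, Shader, Vertex Buffer, etc.
  // would follow the same pattern.
  virtual std::unique_ptr<Texture> texture_alloc() const = 0;
};

class VKBackend : public GPUBackend {
 public:
  std::unique_ptr<Texture> texture_alloc() const override {
    return std::make_unique<VKTexture>();
  }
};
```

The rest of Blender only talks to `GPUBackend` and `Texture`, which is what keeps feature development backend-agnostic.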
Although this doesn't seem like a lot of work, keep in mind that roughly ten times more lines of code need to be written for Vulkan than for OpenGL, even just to display a triangle on the screen. And in the case of Blender you won't be able to see any pixels until the final phase of the project.
AFAIK OpenSubdiv doesn't support Vulkan yet. There might be ways around it (sharing data between an OpenGL and a Vulkan context), but that would be far from ideal and error-prone.
Could you implement some Vulkan features natively in Blender? For example, I know that as of a week ago Vulkan officially supports mesh shading (meshlets). Could Blender keep using the basic geometry data in Edit Mode / Object Mode etc., and build a version that works only with meshlets to improve render time?
Or develop full triangle-cluster occlusion culling like UE? (Which is basically the same as meshlets.)
That depends on how widely mesh shaders will be supported across platforms. Currently mesh shaders are an optional extension, so not yet. If in the future they are supported by more platforms and we have a use for them, we will. Until then I doubt we will add an API for platform-specific features, for maintenance reasons.
Your second question is about occlusion culling and adaptive rendering. We keep our eyes open and discuss upcoming technologies. Note that when adopting a technology you also adopt the parts that don't work so well. The meshlets in UE are, AFAIK, only for static, non-transparent meshes and require preprocessing. As Blender is a 3D content editor, we don't distinguish between static and dynamic geometry, so we need to find a solution that works well for our users.
I think there is another important reason for a Vulkan backend for Blender.
Without Vulkan you can't even use subgroups (waves in HLSL) in compute shaders, which are already widely used in practice in rendering applications and games (Unreal Engine, Unity, recent AAA titles). They use subgroups to efficiently implement advanced GPGPU algorithms (parallel scan, segmented scan, reduction, histogram, radix sort, stream compaction, etc.). As Blender strives to catch up with industry standards, this is a fundamental and urgent topic.
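To make the subgroup idea concrete, here is a CPU simulation (an illustration, not GPU code) of the tree-shaped reduction that GLSL's `subgroupAdd()` performs across the lanes of a wave, assuming a subgroup size of 32:

```cpp
#include <cstddef>
#include <vector>

// Assumed subgroup width; 32 lanes on NVIDIA, other vendors differ.
constexpr std::size_t SUBGROUP_SIZE = 32;

// CPU simulation of a subgroup-wide sum, mirroring the shuffle-down
// tree that hardware executes: log2(32) = 5 steps instead of 31
// serial additions.
int subgroup_add(std::vector<int> lanes) {
  lanes.resize(SUBGROUP_SIZE, 0);  // inactive lanes contribute identity
  for (std::size_t offset = SUBGROUP_SIZE / 2; offset > 0; offset /= 2) {
    // Each step halves the number of active lanes.
    for (std::size_t lane = 0; lane < offset; lane++) {
      lanes[lane] += lanes[lane + offset];
    }
  }
  return lanes[0];  // lane 0 ends up holding the subgroup-wide sum
}
```

On a GPU all lanes of one step run simultaneously, which is why these intrinsics are so much faster than looping over shared memory.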
Yes, there is a lot of performance and functionality work that should be done, but timing-wise this can only happen after the base is ready. I do think new Blender functionality should be supported on all back-ends, natively or with a fallback. That is a key principle, so that content created once works on all officially supported platforms with the same quality; performance can differ, of course. Game engines often make other decisions (accepting visual differences), which is far from ideal for Blender. What you're proposing looks like access patterns to improve performance, which is great, and should be possible to add with a fallback for platforms that don't support it. Another principle is code readability, which allows more developers to work in these areas.
Our goal for 2022 was to be able to use the shader_builder to validate cross-compilation of GLSL. This was committed to master last month and allows developers to continue adding features with validation for OpenGL, Metal and Vulkan. The Vulkan library/headers and shaderc have been added to the pre-compiled libs; the CMake files still need to be adjusted.
In the first half of 2023 I want to work on adding support for compute shaders, including buffers etc. When that is done I think I can give a better estimate of what needs to be done for the graphics pipeline.
In the next few weeks I want to make the design and planning for the compute pipeline more concrete. I want to free up some time so I can focus at least 40% of my time on this project. Community help (feedback, tips, or actual code) is very much appreciated!
I do agree that basic rendering features should be supported on all platforms; game engines also have to consider this, especially for mobile games.
For the Eevee engine, which targets (semi-)real-time rendering on PC, the technology should not differ much from what modern AAA games use. For example, as an amateur 3D art enthusiast, I think that for PBR, Lumen and Nanite together are a very competitive alternative to Eevee, and Unity is also a worthy rival for stylized rendering and prototyping.
Hence it is natural to do things similar to these game engines and shift to a modern GPU-driven rendering framework: virtual shadow mapping, GPU culling, or even Nanite-style visibility buffering and software rasterization.
To achieve the GPU-driven rendering functionality above, one needs GPGPU primitives (parallel scan, reduce, histogram, etc.) as building blocks, which require proper support for compute shaders (subgroup/wave intrinsics, atomics, LDS/TGSM, etc.). For example, efficient GPU culling requires block-level parallel scan and global atomics, and GPU BVH construction/update needs parallel radix sort. These are not "access patterns"; they are fundamental operators for composing any parallel computing algorithm.
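As a concrete reference for the scan primitive mentioned above, here is a CPU version of the Blelloch work-efficient exclusive scan (the function name and power-of-two assumption are mine; a GPU block would run both sweeps with one thread per pair). In GPU culling, each thread scans a 0/1 visibility flag to find the compacted output slot of its object:

```cpp
#include <cstddef>
#include <vector>

// CPU reference of the Blelloch exclusive prefix scan.
// Assumes data.size() is a power of two, like a fixed-size GPU block.
std::vector<int> exclusive_scan(std::vector<int> data) {
  const std::size_t n = data.size();
  // Up-sweep: build partial sums in a balanced binary tree.
  for (std::size_t stride = 1; stride < n; stride *= 2) {
    for (std::size_t i = 2 * stride - 1; i < n; i += 2 * stride) {
      data[i] += data[i - stride];
    }
  }
  data[n - 1] = 0;  // clear the root to make the scan exclusive
  // Down-sweep: push the partial sums back down the tree.
  for (std::size_t stride = n / 2; stride > 0; stride /= 2) {
    for (std::size_t i = 2 * stride - 1; i < n; i += 2 * stride) {
      int tmp = data[i - stride];
      data[i - stride] = data[i];
      data[i] += tmp;
    }
  }
  return data;
}
```

Both sweeps are O(n) total work, which is why this formulation (rather than a naive per-element loop) is the standard building block for GPU compaction and sorting.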
(Note: Entirely self-serving and possibly slightly entitled comment, sorry)
I'm keen to see Vulkan support, as I have recently upgraded to a laptop with an Intel Iris Xe GPU. It's great for games, supports OpenGL and Vulkan, but isn't up to the oneAPI level of the bigger dedicated Intel cards. I was a little disappointed that Blender chose to drop OpenGL before Vulkan was ready to replace it, and I'm wondering how many others feel this way, or is it just me? (Not really wanting to miss out on new Blender features just because I have to run Cycles on the CPU.)
Correction: I meant OpenCL, not OpenGL above, which I guess makes my point about Vulkan slightly moot. (Thanks to those who pointed out my error.)
I think you're misinterpreting the information you've received, or didn't get it from the people who actually know. OpenGL isn't dropped and won't be dropped unless there is a replacement for all platforms.
It might be that you’re confusing OpenGL with OpenCL. But that is off-topic for this thread.