2023-03-10 Eevee/Viewport Workshop (Planning 2023)

Blender HQ is a place to hold workshops in a face-to-face setting and fly in the necessary people. In March 2023 there was a module workshop about Eevee/Viewport planning for 2023. What made it really amazing was that Apple was able to fly in and participate in the workshops. It allowed faster feedback and more technical workshops, covering feature prototyping and debugging tools.

This post will give an overview of the topics the module will be focusing on in the upcoming year.


Eevee-next

Eevee has been evolving since its introduction in Blender 2.80. The goal has been to make it viable both for asset creation and final rendering, and to support a wide range of workflows. Thanks to the latest hardware innovations, many new techniques have become viable, and Eevee can take advantage of them. The Eevee-next core architecture will create a solid base for the many new features to come.

Many areas of Eevee still have to be ported to Eevee-next. Some parts can be ported with minimal changes; others require rewrites to fit into the new architecture. An example of this is Ambient Occlusion.

Eevee-next uses features that are not yet available in the Metal backend. These features include Shader Storage Buffer Objects (SSBOs) and indirect drawing/compute.

In order to support Global Illumination (GI) we have to choose an algorithm. In the past, several real-time GI algorithms have been discussed, but all had limitations that don’t match the expected quality or compatibility. The main challenge is that the chosen solution must also work on GPUs that don’t have hardware ray tracing support.

Want to learn more? Check the Eevee-next development task.

Viewport Compositor

Besides supporting more nodes, the focus will be on adding support for render passes in the viewport compositor. This requires changes to the RenderEngine API in order to support render passes from Cycles and other render engines.

Want to learn more? Check the Viewport Compositing development task.

Vulkan Backend

Due to the deprecation of OpenGL, more bugs have been appearing in drivers lately, requiring workarounds to be engineered in Blender. Vulkan drivers have a validation process in place that reduces the differences between drivers, and Blender will be responsible for implementing large parts of the driver logic in the application, giving us more control over what actually happens.

When finished, it will enable taking more advantage of new GPU features and lower the time spent on platform support. More information can be found in the Vulkan development task.


During editing/animation, data needs to be made accessible to GPUs in order to display it. This is a known bottleneck in Blender and is continuously being improved. As the module team has grown over the last year, we can also spend more time researching how to reduce the current bottlenecks.

A known bottleneck is that data is stored in a CPU-side staging buffer. With modern GPU backends this intermediate buffer can be skipped, reducing data duplication. This will reduce both the required memory and the time spent creating and duplicating the data.
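A minimal sketch of the idea, assuming a toy model where "GPU memory" is just a plain allocation (the function names and buffer model are hypothetical, not Blender's actual code):

```c
#include <stdlib.h>
#include <string.h>

enum { VERT_BYTES = 1024 };

/* Classic path: CPU data -> intermediate staging buffer -> GPU buffer.
 * Returns the number of copies performed. */
int upload_via_staging(const char *verts, char *gpu_buf)
{
  char *staging = malloc(VERT_BYTES);   /* extra allocation */
  memcpy(staging, verts, VERT_BYTES);   /* copy 1: CPU data -> staging */
  memcpy(gpu_buf, staging, VERT_BYTES); /* copy 2: staging -> "GPU" */
  free(staging);
  return 2;
}

/* Modern path: write vertex data into the GPU buffer directly,
 * skipping the staging copy. Returns the number of copies performed. */
int upload_direct(const char *verts, char *gpu_buf)
{
  memcpy(gpu_buf, verts, VERT_BYTES); /* single copy, no extra allocation */
  return 1;
}
```

With a real backend, the direct path corresponds to writing into a mapped, host-visible GPU buffer, so both the intermediate allocation and the second copy disappear.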


Why only one? Have several options (even several at once, with granular control over which elements contribute to what or are excluded) and make it extensible.

Please support a wider and deeper granularity of transparency & draw order.

wider: per scene / collection / object / instance / mesh island / triangle / texel / pixel

deeper: better control over the mapping from fully transparent to fully opaque (sometimes you want more precision in a certain range) and fixed or procedural overriding of draw order.

Maybe this is something more for Blender Labs, though a bit more possibility and UX consistency than what we have now would be nice for Eevee-next.

Stay on topic. I believe you’re enthusiastic about the projects we are doing, but this is not a place for feature requests. If you’re a developer who wants to participate, you can. If you’re an artist requesting features, use Right-Click Select.

This is important as the current schedule is already packed and requires our full attention to be on par with Eevee. Any additional development would impact the planning and delay the initial delivery of Eevee-next, and we believe that slipping won’t benefit users at all.

After the initial release, development of features will continue, but as there is already a huge backlog and only a few developers able to work on it, we have to be really strict about priorities.


I think this has been mentioned before in another thread, but Godot engine’s SDFGI was developed for a similar reason - a solution that works on hardware that doesn’t support raytracing. It seems to work pretty well, and is already open source, so I think it’s a nice fit.

One problem is that SDFGI (at least, in Godot) results in light leaks when walls are too thin, but the developer working on it seems certain that can be mitigated or fixed. Most easy-to-use real time GI solutions have problems like this, anyway. Lumen features light leaks, too.

Also, as far as I’m aware - at the moment it doesn’t support certain things, like dynamic objects contributing to GI (i.e. emissive objects that move, though dynamic lights themselves are supported), but that appears to be on track to improve too. The developer (Juan Linietsky) is likely to be open to communication, if there are questions or otherwise.


What is the estimate for the GI algorithm? Will it make it into 3.6 or 4.0?

Expecting Eevee next to be in 3.6 seems very unrealistic.

Eevee-next, GI, or any other feature will land in a release when it is finished and stable enough. We don’t provide exact versions, as these are often communicated incorrectly, change due to other priorities, and confuse people when not met.

Our focus is on these development targets, and along the way these features will land in the main branch. When the minimum feature set is met, it will be made available in an official release. Until then, provide feedback using alpha builds.


Looking forward to testing eevee-next for macOS once the remaining features are implemented!