Blender HQ is a place to hold workshops in a face-to-face setting and fly in the people needed. In March 2023 there was a module workshop about Eevee/Viewport planning for 2023. What made it really valuable was that engineers from Apple were able to fly in and participate. This allowed for faster feedback and more technical sessions, covering feature prototyping and debugging tools.
This post will give an overview of the topics the module will be focusing on in the upcoming year.
Eevee has been evolving since its introduction in Blender 2.80. The goal has been to make it viable both for asset creation and final rendering, and to support a wide range of workflows. Thanks to the latest hardware innovations, many new techniques have become viable, and Eevee can take advantage of them. The Eevee-next core architecture will provide a solid base for the many new features to come.
Many areas of Eevee still have to be ported to Eevee-next. Some parts can be ported with minimal changes; others require rewrites to fit the new architecture. An example of this is Ambient Occlusion.
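Eevee computes Ambient Occlusion on the GPU in screen space, and the Eevee-next rewrite will do it differently again; as a generic illustration of what an AO term measures, here is a minimal CPU-side Monte Carlo sketch (not Blender code, occluders simplified to spheres):

```python
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    """Geometric ray/sphere intersection test (direction must be unit length)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-4  # ignore self-intersection at the shading point

def ambient_occlusion(point, normal, occluders, samples=256, seed=7):
    """Fraction of hemisphere rays that escape without hitting an occluder:
    1.0 = fully open, 0.0 = fully occluded."""
    rng = random.Random(seed)
    unoccluded = 0
    for _ in range(samples):
        # Uniform direction on the unit sphere via rejection sampling,
        # then flipped into the hemisphere around `normal`.
        while True:
            d = [rng.uniform(-1.0, 1.0) for _ in range(3)]
            l2 = sum(x * x for x in d)
            if 1e-6 < l2 <= 1.0:
                break
        inv = 1.0 / math.sqrt(l2)
        d = [x * inv for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = [-x for x in d]
        if not any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            unoccluded += 1
    return unoccluded / samples
```

A point with no occluders returns 1.0; placing a sphere above it darkens the result, which is exactly the cavity/contact shadowing the AO pass provides.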
Eevee-next uses features that are not available in the Metal backend yet. These features include Shader Storage Buffer Objects (SSBOs) and indirect drawing/compute.
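The point of indirect drawing is that per-draw parameters live in a GPU buffer (written, for example, by a culling compute shader) instead of being passed as CPU call arguments. A pure-Python stand-in, with hypothetical names, where `DrawArgs` mirrors the layout of Vulkan's `VkDrawIndirectCommand`:

```python
from dataclasses import dataclass

@dataclass
class DrawArgs:
    # Same fields as VkDrawIndirectCommand / GL's DrawArraysIndirectCommand.
    vertex_count: int
    instance_count: int
    first_vertex: int = 0
    first_instance: int = 0

def cull_pass(objects, view_min, view_max, arg_buffer):
    """Stand-in for a compute shader: writes draw arguments for each object
    straight into the argument buffer. Invisible objects get
    instance_count = 0, so no CPU round-trip is needed to skip them."""
    for obj in objects:
        visible = view_min <= obj["position"] <= view_max
        arg_buffer.append(DrawArgs(obj["vertex_count"], 1 if visible else 0))

def multi_draw_indirect(arg_buffer):
    """Stand-in for the driver executing one indirect multi-draw call:
    it reads every DrawArgs entry from the buffer itself."""
    drawn_vertices = 0
    for args in arg_buffer:
        drawn_vertices += args.vertex_count * args.instance_count
    return drawn_vertices
```

The CPU issues a single call; which geometry actually gets drawn is decided entirely on the GPU side, which is what makes GPU-driven culling and scene traversal possible.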
In order to support Global Illumination (GI) we have to choose an algorithm. In the past, several real-time GI algorithms have been discussed, but each had limitations that didn't match the expected quality or compatibility. The main challenge is that the chosen solution must also work on GPUs that don't have hardware ray-tracing support.
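Without hardware ray-tracing units, any ray queries a GI technique needs have to be evaluated in ordinary (compute) shader code. As a generic illustration of what that software workload looks like, here is the classic Möller–Trumbore ray/triangle intersection in pure Python; this is not Eevee code, just the kind of routine a software tracer evaluates per ray:

```python
def ray_triangle(orig, direction, v0, v1, v2, eps=1e-7):
    """Möller–Trumbore ray/triangle intersection.
    Returns the hit distance t, or None on a miss."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:      # outside barycentric range
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None
```

Hardware ray tracing accelerates exactly this kind of test (plus acceleration-structure traversal) in fixed-function units, which is why techniques that rely on many rays per pixel are hard to make fast on GPUs without it.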
Want to learn more? Check the Eevee-next development task.
Besides supporting more nodes, the focus will be on adding support for render passes in the viewport compositor. This requires changes to the RenderEngine API so that render passes from Cycles and other render engines can be accessed.
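To see why a compositor wants individual render passes rather than just the final image, consider how Cycles-style light passes recombine into the Combined result: roughly, for each component, (direct + indirect) light multiplied by its color pass, plus emission. A toy per-pixel sketch (real passes are full RGB images, and pass names here follow Cycles conventions but are simplified):

```python
def recombine(passes):
    """Rebuild an approximate Combined value for one pixel from
    Cycles-style light passes:
        combined ≈ Σ (direct + indirect) * color  + emission
    over the diffuse, glossy, and transmission components."""
    out = 0.0
    for component in ("diffuse", "glossy", "transmission"):
        direct = passes.get(f"{component}_direct", 0.0)
        indirect = passes.get(f"{component}_indirect", 0.0)
        color = passes.get(f"{component}_color", 1.0)
        out += (direct + indirect) * color
    return out + passes.get("emission", 0.0)
```

Because each component is available separately, a compositor can grade or denoise one pass (say, indirect glossy) without touching the others, and then recombine; that is the workflow viewport compositing of render passes would enable interactively.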
Want to learn more? Check the Viewport Compositing development task.
Due to the deprecation of OpenGL, more driver bugs have been appearing lately, and workarounds need to be engineered in Blender. Vulkan drivers go through a validation process that reduces the differences between drivers, and Blender will be responsible for implementing large parts of the driver logic in the application, giving us more control over what actually happens.
When finished, this will make it possible to take better advantage of new GPU features and reduce the time spent on platform support. More information can be found in the Vulkan development task.
During editing and animation, data needs to be made accessible to the GPU in order to display it. This is a known bottleneck in Blender and is continuously being improved. As the module team has grown over the last year, we can also spend more time researching how to reduce the current bottlenecks.
A known bottleneck is that data is stored in a CPU-side staging buffer. With modern GPU backends this intermediate buffer can be skipped, reducing data duplication. This will reduce both the memory required and the time needed to create and upload the data.
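The difference between the two upload paths can be sketched with a toy model that just counts bytes. This is not how the backends are implemented (real code maps device memory through Vulkan/Metal APIs); it only illustrates why skipping the staging copy halves the copies and the transient memory:

```python
class UploadStats:
    """Tracks how much data a mesh upload copies and keeps alive."""
    def __init__(self):
        self.bytes_copied = 0
        self.peak_bytes_alive = 0

def staged_upload(mesh_bytes, stats):
    """Classic path: fill a CPU-side staging buffer, then copy it into the
    GPU buffer. The data exists twice and is copied twice."""
    staging = bytearray(mesh_bytes)      # first copy: into the staging buffer
    stats.bytes_copied += len(staging)
    gpu_buffer = bytes(staging)          # second copy: staging -> GPU buffer
    stats.bytes_copied += len(gpu_buffer)
    stats.peak_bytes_alive = len(staging) + len(gpu_buffer)
    return gpu_buffer

def direct_upload(mesh_bytes, stats):
    """Modern path: write vertex data straight into a mapped GPU buffer,
    skipping the intermediate staging copy entirely."""
    gpu_buffer = bytes(mesh_bytes)       # single copy into mapped memory
    stats.bytes_copied += len(gpu_buffer)
    stats.peak_bytes_alive = len(gpu_buffer)
    return gpu_buffer
```

For a mesh re-uploaded every frame during editing, the staged path moves twice the bytes and momentarily holds two full copies of the data, which is exactly the duplication the new backends can avoid.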