2023-05-22 Eevee/Viewport Module Meeting

Practical Info

This is a weekly video chat meeting for planning and discussion of Blender Eevee/viewport module development. Any contributor (developer, UI/UX designer, writer, …) working on Eevee/viewport in Blender is welcome to join.

For users and other interested parties, we ask that you read the meeting notes instead, so that the meeting can remain focused.

  • Google Meet
  • Next Meeting: June 5, 2023, 11:30 AM to 12:00 PM Amsterdam Time (Your local time: 2023-06-05T09:30:00Z to 2023-06-05T10:00:00Z)


Attendees

  • Clement
  • Jeroen
  • Michael
  • Miguel


  • Eevee-next development is a bit behind schedule. We discussed during the meeting how to address this issue: taking over code reviews and adding more people to the development. Jeroen and Miguel are available to help, but the scope of the work needs to be clear for this to be effective.


  • Volumes landed in main. There is an issue where transparent surfaces behind volume objects are not rendered correctly; this still needs to be investigated.


  • Development continued on Overlay-next.

Metal Backend

  • Improved depth bias to reduce tile artifacts on Apple Silicon GPUs.
  • Eevee-next requires some features in the backend:
    • Currently, baking irradiance caches fails on some devices. It isn't clear whether the issues are related to the specific branch.
    • Image atomics are required, but are not yet supported by the backend. We should add a simple solution first; after that there is time to iterate on the approach. The idea is to add a 2D buffer for the atomics.
  • Apple is narrowing down why AMD GPUs can have stability issues. A cause has been found, but it is still unclear how to work around it.
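The image-atomics workaround mentioned above can be sketched on the CPU side. This is only a minimal illustration of the idea, assuming a row-major flat buffer addressed as `y * width + x`; the type and function names are hypothetical, not Blender's API — in the real backend this would be a storage buffer accessed with buffer atomics from a shader.

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

/* Sketch of emulating 2D image atomics with a flat buffer: instead of
 * imageAtomicMax(img, ivec2(x, y), value), a shader would address a
 * plain storage buffer at index y * width + x. */
struct AtomicTileBuffer {
  int width;
  int height;
  std::vector<std::atomic<uint32_t>> data;

  AtomicTileBuffer(int w, int h) : width(w), height(h), data(w * h) {}

  /* Equivalent of an atomic max on an r32ui image texel.
   * Returns the value stored before the operation. */
  uint32_t atomic_max(int x, int y, uint32_t value)
  {
    std::atomic<uint32_t> &texel = data[y * width + x];
    uint32_t prev = texel.load();
    /* compare_exchange_weak reloads `prev` on failure, so the loop
     * retries until the stored value is at least `value`. */
    while (prev < value && !texel.compare_exchange_weak(prev, value)) {
    }
    return prev;
  }

  uint32_t read(int x, int y) const
  {
    return data[y * width + x].load();
  }
};
```

The flat indexing keeps the buffer layout identical to the 2D image it replaces, so switching to native image atomics later would only change the access, not the addressing.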

Vulkan Backend

  • Adding support for Workbench. Still a work in progress.
    • Improved stability of the Vulkan backend when descriptors were fragmented or unavailable.
    • Added support for texture formats that use non-standard low-precision floating-point values. Using a template, a regular float can be converted to any low-precision float format, with conversions appropriate for textures, such as clamping to the maximum value and supporting unsigned floating points.
    • Added support for reading back depth textures and sub-areas of textures.
    • Improved surface format selection to prefer unsigned normalized sRGB surfaces.
    • Not yet working:
      • Hair rendering (requires texel buffer support)
      • Studio lights have inverted normals.
      • Volumes haven't been tested yet.
      • Point clouds haven't been tested yet.
      • Shadow rendering (requires stencil buffers).
      • Depth of field (requires mip mapping).
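The templated float conversion mentioned above can be illustrated with a small sketch. This is not Blender's actual code; it is a hedged example assuming an IEEE-754 float input and a target format with `ExponentBits` exponent bits and `MantissaBits` mantissa bits (at most 23), no sign bit, truncating rounding, and denormals flushed to zero. Values above the largest finite value are clamped instead of becoming infinity, and negative or NaN inputs clamp to zero, matching how unsigned texture floats such as the 11-bit components of B10G11R11 behave.

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

/* Convert a 32-bit float to an unsigned low-precision float format.
 * E.g. <5, 6> gives the 11-bit unsigned float used in B10G11R11. */
template<int ExponentBits, int MantissaBits>
uint32_t float_to_unsigned_float(float value)
{
  constexpr int bias = (1 << (ExponentBits - 1)) - 1;
  /* Largest finite value: highest non-infinity exponent, full mantissa. */
  const float max_value = std::ldexp(2.0f - std::ldexp(1.0f, -MantissaBits),
                                     ((1 << ExponentBits) - 2) - bias);

  if (!(value > 0.0f)) {
    return 0; /* Unsigned format: negatives, zero and NaN clamp to 0. */
  }
  if (value > max_value) {
    value = max_value; /* Clamp to max instead of producing infinity. */
  }

  uint32_t bits;
  std::memcpy(&bits, &value, sizeof(bits));
  const int exp32 = int((bits >> 23) & 0xFF) - 127;
  const uint32_t mant32 = bits & 0x7FFFFFu;

  const int new_exp = exp32 + bias;
  if (new_exp <= 0) {
    return 0; /* Flush denormals to zero in this sketch. */
  }
  /* Rebias the exponent and truncate the mantissa to the target width. */
  return (uint32_t(new_exp) << MantissaBits) | (mant32 >> (23 - MantissaBits));
}
```

For example, `float_to_unsigned_float<5, 6>(1.0f)` yields `0x3C0`, the 11-bit unsigned-float encoding of 1.0; the same template with `<5, 10>` reproduces the positive half of half-float encoding.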

Jeroen, what is Overlay-Next? Is it like a render engine of its own? This is the first time I'm hearing about it. I couldn't find more information about it, since it's a generic term with a lot of similar irrelevant results on Google.


My best guess is the real-time compositor, but since that already has a name I'm also a little confused.

It's a rewrite of how all the overlays are displayed (relationship lines, outlines and such). It was integrated in Workbench, I think, and is now separated for easier handling and faster drawing, since not everything that Workbench can do is needed for the overlays.
That's what I know, at least.

Since Blender 2.8 the viewport consists of multiple draw engines that are composited on top of each other. The overlay engine is responsible for drawing all editor/viewport related elements that aren't part of Workbench/Eevee/Cycles or any other render engine. This includes edit mode drawing, selection outlines, relationship lines and center points, to name a few.
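The compositing described above can be sketched as follows. The names and types here are illustrative, not Blender's actual API: each enabled engine draws its layer in turn, so later engines (like the overlay engine) end up on top of the render engine's output.

```cpp
#include <string>
#include <vector>

/* Hypothetical stand-in for a viewport draw engine. */
struct DrawEngine {
  std::string name;

  /* In Blender each engine renders into framebuffers; here we only
   * record the order in which layers are composited. */
  void draw(std::vector<std::string> &layers) const
  {
    layers.push_back(name);
  }
};

std::vector<std::string> composite_viewport(const std::vector<DrawEngine> &engines)
{
  std::vector<std::string> layers;
  for (const DrawEngine &engine : engines) {
    engine.draw(layers); /* Later layers draw over earlier ones. */
  }
  return layers;
}
```

With `{{"workbench"}, {"overlay"}}` the overlay layer ends up last, i.e. on top of the render engine's output.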

Blender 2.8 targeted OpenGL 3.3; in the near future we are targeting OpenGL 4.3 and Vulkan. Those new backends have many features we can benefit from, but to do so we need to rewrite all the draw engines, including Eevee, Workbench, overlays and selection.

In Source/EEVEE & Viewport/Color Management Drawing Pipeline - Blender Developer Wiki there is an example of what the overlay engine renders, which is then laid on top of a Cycles render. Perhaps you know that before Blender 2.8, Cycles didn't have any overlays rendered.

This is the main focus of the team right now.


AMD GPUOpen has just released a good-looking SSGI framework on their website, called the AMD Capsaicin Framework. I wonder if it can be used for Eevee?


Thanks for pointing out this research. We are familiar with most open-source projects and whitepapers, including this one, but now isn't the time to discuss which GI solution could be used in Eevee. What exactly do you mean by "can be used for Eevee"? Eevee is open source, so anyone is able to create a version that incorporates a whitepaper. Whether it will become the default GI solution in Eevee is a different topic (the answer is most likely not, as we have other plans and requirements).

We are deep in the development of our GI approach and in the near term don't have the time to do research or impact analysis on other GI whitepapers, or to deviate from our current plans. The goal is to have a solution for Blender 4.0, and it is hard to get even the minimum set working as intended with the number of people we have now. Finding our current implementation already took months of research, testing and engineering to make it work with our architecture. Implementing papers in render engines like Eevee isn't a straightforward or easy task, so looking at a different implementation at this time is not feasible.

There are also other requirements that should be considered:

  • Any solution should be compatible with all platforms that we support, or have a fallback in place. According to their documentation, it requires ray-tracing hardware.
  • Eventual results may vary, as they are targeting <5 ms draw times and we have a larger drawing budget.
  • Blender typically draws more geometry than game engines, as game engines typically preprocess their geometry for faster drawing; this could also influence the algorithm. Blender cannot do extensive preprocessing, as users are editing the models that are drawn, while for game engines this is typically done by the game developer. They mention that they don't do preprocessing, which is good, but they also mention a two-level caching mechanism; it is unclear whether that is too limited, or how the caching influences the final results.

The GI-1.0 technique takes advantage of hardware-accelerated ray tracing in modern GPUs but intelligently uses additional lighting structures to reduce the number of required rays and enable evaluation of indirect lighting entirely at runtime on current hardware.