This topic is for communicating the state of the EEVEE rewrite.
For about a year now, the EEVEE rewrite has been conducted in a dedicated branch. We are now entering a phase where we think it is better to merge it into master, to make each feature easier to review and the development less abstract.
This merge will be conducted in steps. Each step will be committed to master and will be ready to be tested under the new (temporary) render engine named “EEVEE Next”. The engine is hidden behind an experimental option that you need to enable in the Preferences under the Experimental tab (only visible if Developer Extras is also enabled).
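For scripted setups, here is a minimal sketch of flipping these options via Python. The property name `use_eevee_next` and the engine identifier `BLENDER_EEVEE_NEXT` are assumptions based on the description above, so check the tooltips in your build for the exact RNA paths:

```python
import bpy

prefs = bpy.context.preferences
# "Developer Extras" (Interface tab) must be on for the option to show up.
prefs.view.show_developer_ui = True

# Assumed name of the experimental toggle described above.
prefs.experimental.use_eevee_next = True

# Assumed identifier of the temporary engine.
bpy.context.scene.render.engine = 'BLENDER_EEVEE_NEXT'
```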
It is important to note that EEVEE Next will require OpenGL 4.3 support at minimum. We currently do not enforce this requirement for the rest of Blender, but this is likely to change in the next release (or the one after).
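For reference, the driver-reported GL version can be checked from within Blender using the `gpu` module; version string formats vary between drivers, so the parsing below is only a best-effort sketch:

```python
import re
import gpu

# Driver-reported version string, e.g. "4.3.0 NVIDIA 470.86" or "4.1 Metal - 83".
version = gpu.platform.version_get()
major, minor = (int(g) for g in re.match(r"(\d+)\.(\d+)", version).groups())

# EEVEE Next needs at least GL 4.3, per the requirement above.
if (major, minor) < (4, 3):
    print(f"OpenGL {major}.{minor} detected: below the 4.3 minimum for EEVEE Next")
```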
For each milestone, I will be collecting feedback. However, I do not expect to deliver a fully featured engine anytime soon, given my other duties.
Milestone zero was to merge the GPU and DRW module changes. This has already been done over time, and the changes were quite local to those modules.
The first milestone is to merge the codegen changes. This means making the new BSDF paradigm compatible with the old EEVEE codebase. This part of the code cannot be branched depending on the render engine. It should be testable with the old engine and should not introduce any regressions.
Other milestones are yet to be defined, as their order and dependencies are still unclear. Simply getting something drawn will require a lot of sub-modules to be ported.
I would also like to state that, even after all this time, there are still features in the research phase. So the plan is not to merge a fully featured engine, but to move its development into master while polishing what has already been done.
Thanks for the update and all the good work on Blender!
Quick question: with macOS OpenGL frozen at 4.1, will the “experimental” EEVEE implementation come to macOS in the main branch later, once the Vulkan implementation with a Metal backend is done?
To clarify: macOS OpenGL is frozen at 3.3 on some Apple hardware we support. So this is the common denominator for now.
If the system does not meet the requirement, an error message will be displayed at the top of the viewport.
So until there is full Metal support, this will be the state of EEVEE Next on macOS.
Apple is currently providing patches for a fully working Metal backend. This should provide better performance than a MoltenVK implementation in some areas.
Do I understand this correctly: at the moment, EEVEE Next (in master) does not render anything at all, because nothing has been implemented yet? No critique, just curious if that’s what you’re saying.
Yes. For the moment it does not draw anything. And even though you may see some commits to the new codebase, it will not render anything until the first milestone is finished.
I remember following the development of EEVEE years ago, after finding out about your “real-time Cycles” OpenGL build of Blender. I remember it took so long just to add the simplest of features. That’s what this reminds me of. Are you essentially rebuilding the entire engine from scratch without using any of the existing code? It’s amazing how much work you guys put in just to get Blender to the next level. I’m totally looking forward to seeing this new engine one day!
Vulkan implementation postponed to next year?
I mean completely deleting OpenGL, so that Windows builds will work using Vulkan as the base API.
Looks like it won’t be before Blender 4.0.
It makes me very glad to hear that there’s a focus on performance early on. Personally, the biggest issue I’ve had with the current EEVEE implementation is how GPU-hungry it quickly gets as soon as shader graphs become even remotely complex, or as soon as polycounts get up into the millions (in addition to using a few lights and expensive rendering effects that all add up).
I’m just gonna drop my personal wishlist of things that would make EEVEE more valuable for me:
- World-space curvature input (similar to the curvature in Solid view) for user-created shaders, useful for edge wear and the like.
- Custom GLSL shader support (for those who don’t like node pasta). Ideally these shaders should be writable in as renderer-agnostic a way as possible, allowing artists to port their shaders to third-party renderers easily, or to use shaders from websites like Shadertoy.
- Truly real-time: no sampling and drawing of the same frame several times, so that artists get fast iteration times when working with the render engine, with no distracting flickering and no having to stop the viewport to see what a “final render” might look like. Everything the user sees in the viewport should closely resemble a final render, preferably be identical to it.
- A renderer that scales well with large and complex scenes. Virtual geometry, virtual textures, smart draw-call batching, and texture-based lightmaps would all be steps in this direction.
- The ability to bake meshes (it’s okay if it’s horribly slow; it’s just that, now that we’re getting multi-threaded shader compilation, I would generally prefer to work in a real-time renderer to create my textures from shaders rather than in Cycles).
- The ability to preview as many render passes individually as possible, like normal/AO/diffuse/albedo/metallic/roughness/specularity/transparency/height (see the sketch after this list). I’m the creator of a (lesser-known) addon that lets you fetch image data like this in a similar fashion to “GrabDoc”, and it was significantly harder to get the data I just mentioned than I think it should be: https://github.com/ItsCubeTime/FastPBR . The purpose is to be able to create geometry in Blender, extract maps like AO/curvature/transparency/normal/height/matID, and then bring those maps into software like Substance Painter to add micro detailing. However, it wasn’t very straightforward to get these relatively simple maps: to get shader-related maps I would have to modify the shader graphs, to get curvature I saw no option but to switch to something like Solid view and modify a bunch of settings there, to get normal maps I had to use a Workbench matcap, etc. I would like to see a unified way of getting all of these maps in a simple manner.
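For what it’s worth, here is a minimal sketch of how some per-view-layer passes can already be toggled and previewed from Python today. Which passes EEVEE actually outputs varies by version, and the `'NORMAL'` enum value for the viewport preview is an assumption:

```python
import bpy

view_layer = bpy.context.view_layer
# Per-view-layer pass toggles; EEVEE outputs a subset of these.
view_layer.use_pass_normal = True
view_layer.use_pass_ambient_occlusion = True
view_layer.use_pass_diffuse_color = True

# In a rendered viewport, a single pass can be picked for preview.
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        shading = area.spaces.active.shading
        shading.type = 'RENDERED'
        shading.render_pass = 'NORMAL'  # assumed to be an available enum item
```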
I do reuse a lot of code. But a lot of the core components that had been hacked over the years to accommodate EEVEE have now been rewritten to not be hacky and to support more extensive features.
Now that I think of it, I’m just taking the time to implement some of the ideas I had in mind at the start of the first EEVEE.
Vulkan is not a top priority, as the incentives to adopt it are pretty low. The only interesting features that would be nice for us are hardware raytracing and maybe mesh shaders.
However, it is definitely on the roadmap and we plan to replace OpenGL completely with it.
But I do note that this question is kind of off topic, so I won’t discuss Vulkan further here.
Your post falls into the feature-request category, which is not really appropriate for this topic; I created it for feedback on what I’m working on.
The first two features are already planned, but we lack the resources (developer time) to bring them to life.
The next two are just not possible with the resources we have. Every real-time engine has gone down the temporal-denoising route to increase fidelity. As for the optimizations, many are not possible, but we are always looking at how we can improve in specific areas.
I guess “the ability to bake meshes” is a typo and you meant baking texture maps. This is not part of EEVEE, but it may be part of the new texturing-workflow roadmap.
The last one is too specific, but it might also be related to the texturing workflow.
Thanks for the response, and sorry if this wasn’t the right place for this sort of feedback.
I would argue that it isn’t necessarily a very specific use case, however; it’s common in all industries to want access to separate render passes for compositing. Maybe I just put it in an odd way by describing more specifically what I would use it for.
And apart from the internal code improvements, what can we EEVEE Next users expect? I ask because the majority of the requests are to finally integrate Vulkan and raytracing, and that is not in the short-term planning.