Very happy to see this come into master as an opt-in experimental feature - we will be able to keep building custom in-house Blender builds for production from master and occasionally test this out! I’ll be keeping an eye on it. Big Eevee user here!
Hi, perhaps this is a dumb question, but I haven’t been able to find an answer on the developer portal or in blog posts, so I’d like to ask here.
In Eevee Next, will the following “issue” be addressed?
- During viewport playback or navigation, only one sample is rendered, so antialiasing is gone and AO and other effects don’t work properly. (Denoising doesn’t help much; it erases a lot of detail.)
I’m using Blender for interactive presentations a lot and this is a real downer. I also believe it’s very relevant for VR/AR.
I’ve heard about a possible transition to deferred rendering, so perhaps that would somehow help, or allow for more samples per frame during playback on a strong graphics card?
Thanks for any hints.
After Vulkan replaces OpenGL in the future, does it mean that Eevee 2.0 will still need a major rewrite?
No. There’s already a layer between Blender and OpenGL called Gawain(?), which means they can switch the back end to Vulkan (or Metal) without any changes, but the Vulkan back end still needs a lot of work.
This is really great news, thanks a lot! I hope there will be a thread here at devtalk like the one for Cycles Metal.
This is what’s happening:
EEVEE is a GPU-only renderer.
The only thing that would change is the minimum OpenGL requirement, raised to 4.3, which means a roughly 2013-or-later GPU (at least Radeon HD 5000, GTX 400, or Intel 8th Gen HD Graphics and above, unless there are driver bugs on those old unsupported GPUs).
Minimum, but is it ~~planned~~ expected to use features that require higher OpenGL levels?
Like bindless textures from 4.4(?) or sparse textures?
Gawain was a compatibility layer between legacy OpenGL and the core profile. The abstraction layer you’re referring to is called the GPU module. When adding another backend, a ton of work still needs to be done, as every backend comes with different requirements.
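To make the “abstraction layer” idea concrete, here is a minimal Python sketch of a backend-agnostic GPU interface. All names here are invented for illustration - this is not Blender’s actual GPU module API, just the general shape of the design being described:

```python
from abc import ABC, abstractmethod

# One interface the engine talks to; each graphics API implements it.
class GPUBackend(ABC):
    @abstractmethod
    def create_shader(self, source: str) -> int: ...

    @abstractmethod
    def draw(self, shader_id: int) -> str: ...

class OpenGLBackend(GPUBackend):
    def create_shader(self, source):
        return hash(source) & 0xFFFF  # stand-in for glCreateShader etc.

    def draw(self, shader_id):
        return f"GL draw with shader {shader_id}"

class VulkanBackend(GPUBackend):
    # A real Vulkan backend needs far more: pipelines, descriptor sets,
    # explicit synchronization... hence "a ton of work" per backend.
    def create_shader(self, source):
        return hash(source) & 0xFFFF

    def draw(self, shader_id):
        return f"Vulkan draw with shader {shader_id}"

# Engine code only ever sees the abstract interface.
def render(backend: GPUBackend):
    sid = backend.create_shader("void main() {}")
    return backend.draw(sid)
```

The point of the sketch: swapping the backend changes nothing on the engine side, but each concrete backend still has to satisfy its API’s own requirements internally.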
Moderation note: Please note that this topic is not the proper place for feature requests or speculations. Please keep it on topic.
Bit of a question about this Eevee-next.
Will it support deferred rendering or something similar? I would like to see support for more than 128 active lights in Eevee.
Here is a statement from the task on the “EEVEE Rewrite”. https://developer.blender.org/T93220
Main features are:
- High light count support: Lights are now efficiently culled and there is virtually no limit to the maximum number of lights in a scene.
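For intuition about how “efficiently culled” removes the hard light limit, here is a toy sketch of tile-based light culling, the general family of techniques used for high light counts in real-time renderers. The tile size, data layout, and function names are invented for illustration and are not EEVEE-Next’s actual implementation:

```python
# Toy 2D tiled light culling: the screen is split into tiles, and each tile
# keeps only the lights whose screen-space circle of influence overlaps it.
TILE = 16  # tile size in pixels (illustrative value)

def cull_lights(lights, width, height):
    """lights: list of (x, y, radius) in pixel space.
    Returns {(tx, ty): [light indices]} containing only overlapping lights."""
    tiles = {}
    for i, (x, y, r) in enumerate(lights):
        tx0 = max(0, int((x - r) // TILE))
        ty0 = max(0, int((y - r) // TILE))
        tx1 = min((width - 1) // TILE, int((x + r) // TILE))
        ty1 = min((height - 1) // TILE, int((y + r) // TILE))
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                tiles.setdefault((tx, ty), []).append(i)
    return tiles

# A pixel in tile (tx, ty) then only loops over tiles[(tx, ty)] instead of
# every light in the scene, so the scene-wide light count stops mattering.
lights = [(8, 8, 4), (120, 120, 10)]
culled = cull_lights(lights, 128, 128)
```

The per-pixel shading cost then depends on how many lights touch that tile, not on the total number of lights in the scene.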
Thank you for this information. I am going through the issue now and I really like what I see
As always it’s time and money.
If I recall correctly, work on Lumen was done by four people and took two years before it was made public a year ago. I didn’t stumble across numbers for Nanite, but I guess they are similar or higher - a few people working on just that one feature for multiple years. And even those features are not done yet; those people are still working on them.
EEVEE has only one main developer, so a single feature like Lumen would take ~12 years, with no other Flareon features in the meantime.
And for the money part, the BF can’t just hire multiple people for a single EEVEE feature while other areas like UV have ~zero.
Anyway, EEVEE has tons of potential features that I guess Clément is aware of, but it’s just not possible to implement all of them due to lack of workforce.
Moderation notice 1: PLAY NICE! ALL OF YOU! I’ve hidden all the bickering
Moderation notice 2: Thread is on lock down until @ThomasDinges decides what to do with this thread.
We have reached the first milestone!!
Adapting the new codegen to the old EEVEE codebase was more work than I originally anticipated.
A core design shift was to make the codegen render-engine agnostic, meaning there should be no special behavior depending on the render engine that runs it. At first glance this does not look like a major issue for Blender, because EEVEE is the only engine to use the codegen. But it became important with the recent decision to keep EEVEE-Next and the original EEVEE implementation side by side. It also makes implementing other engines easier.
Moreover, this delegates geometry support to the engine itself, making support for other geometry types easier.
Technically, the new codegen now only produces function strings that are then used by the render engine however it wants. The strings are shader-stage agnostic, meaning they can be used inside vertex shaders (e.g. true procedural displacement support for EEVEE-Next).
Porting to the old codebase was also a test of that and allowed me to polish the design, furthering the separation between the engine and the codegen.
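To illustrate the “function strings” idea, here is a toy codegen in Python. Everything here - the function names, the node representation, the GLSL-like output - is invented for illustration; the real codegen is far more involved. The key property shown is that the output is just a string with no stage-specific code, so the engine alone decides where it ends up:

```python
# Toy codegen: turn an ordered list of node outputs into a GLSL-like
# function string. The string itself has no stage-specific code.
def codegen_nodetree(name, nodes):
    """nodes: list of (output_var, expression) pairs in dependency order.
    Returns a function string the engine can paste into any shader stage."""
    body = "\n".join(f"  float {var} = {expr};" for var, expr in nodes)
    return f"float {name}(float fac)\n{{\n{body}\n  return out_val;\n}}\n"

func_src = codegen_nodetree("ntree_displacement", [
    ("scaled", "fac * 2.0"),
    ("out_val", "scaled + 0.5"),
])

# The engine is free to concatenate this into a vertex shader (for true
# displacement) or a fragment shader (for bump) - the codegen never knows.
vertex_shader = func_src + "void main() { /* uses ntree_displacement() */ }"
```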
I cleaned up some of the trickiest things we were doing to support displacement as bump mapping. The choice between fine bump or fast bump (2.80 blocky style) is now the responsibility of the engine and may become a performance option for EEVEE-Next. This is also important for upscaling, which I am aiming to support in EEVEE-Next.
The Shader-to-RGB node is also now supported and engine agnostic. The engine can choose to implement it or not. The only change in behavior for the original EEVEE implementation is that any shader using a Shader-to-RGB node will no longer have SSR or SSS on any of its BSDF nodes. This change mimics the expected behavior of EEVEE-Next. I am still trying to find a way to keep the old behavior, but it seems complicated.
When working on supporting the current SSS implementation, I stumbled across what I can only describe as a bad choice by my past self. Some of you might know that the SSS radius socket default values are used to pre-compute the SSS, and that the socket input is only used as a scaling factor. Alas, a scaling factor makes no sense at all as a parameter. What the input should have been from the beginning is a mean radius. This makes more sense and is more compatible with what you would expect from Cycles. You would tweak the default values for the average SSS coloring, and the input would then bring the result closer to what Cycles outputs. I did not provide any version patching for now, but this can be done easily.
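To contrast the two interpretations of the radius socket, here is a toy numeric sketch. The numbers and formulas are purely illustrative and are not Blender’s actual SSS code; they only show why “input as mean radius” is the more predictable parameter:

```python
# Per-channel socket defaults used for precomputation (illustrative values).
default_radius = (1.0, 0.2, 0.1)

def old_effective_radius(socket_input):
    # Old interpretation: the socket input is a unitless scale
    # applied to the defaults - hard to reason about as a parameter.
    return tuple(r * socket_input for r in default_radius)

def new_effective_radius(socket_input):
    # New interpretation: the socket input IS the mean radius; the
    # defaults only set the per-channel ratios (the "SSS coloring").
    mean_default = sum(default_radius) / len(default_radius)
    return tuple(r * (socket_input / mean_default) for r in default_radius)

# With the new mapping, feeding in 0.05 yields channels whose mean radius
# is exactly 0.05, regardless of what the defaults happen to be.
```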
Testing is highly encouraged. Any regressions not stated in the commit message should be reported to the bug tracker.
I will now focus on porting EEVEE-next first bits.
Hi everyone, I would like to announce that a second milestone was reached.
We now have material nodetree support inside EEVEE-Next. There is a placeholder lighting model to be able to check how the BSDFs are mixed. Note that only the forward shading pipeline is effectively implemented.
This was merged for the 3.2 release so we can have some user testing to see if some nodetree breaks. (Edit: The experimental options are disabled in beta and release builds. Testing is to be done with the 3.3 alpha branch.)
Mesh & Curves (Hair) surface types should be supported. Vertex Displacement is also enabled by default and will become an option down the line.
Grease Pencil geometry is done but disabled for the time being until we have a per object option to select which renderer to use.
The Shader-to-RGB handling was not straightforward, but it was effectively dealt with and should now be supported.
Volume shaders are not yet supported.
BSDF shading makes use of stochastic sampling of the BSDFs, so the number of BSDFs should no longer make the shader linearly more expensive. Temporal accumulation is still not implemented, so there can be noise left if you stack many BSDFs with very different properties.
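A minimal sketch of the stochastic-sampling idea, in Python rather than shader code and with invented names: each sample evaluates a single randomly chosen BSDF (weighted by its mix factor) and divides by the pick probability, so the per-sample cost is constant in the number of BSDFs while the accumulated estimate stays unbiased. This also shows why noise remains until samples are accumulated:

```python
import random

def eval_stack_stochastic(bsdfs, rng):
    """bsdfs: list of (weight, eval_fn) pairs. Returns one unbiased sample
    of sum(w_i * f_i) by evaluating a single randomly chosen BSDF."""
    total = sum(w for w, _ in bsdfs)
    r = rng.uniform(0.0, total)
    for w, eval_fn in bsdfs:
        if r < w:
            # Chosen with probability w / total, so divide by that pdf:
            # (w * eval_fn()) / (w / total) == total * eval_fn()
            return total * eval_fn()
        r -= w
    return total * bsdfs[-1][1]()  # guard against float edge cases

# Hypothetical two-BSDF stack: 70% of a dim lobe, 30% of a bright one.
stack = [(0.7, lambda: 0.2), (0.3, lambda: 1.0)]
exact = 0.7 * 0.2 + 0.3 * 1.0  # = 0.44

# One sample is noisy; averaging many converges to the exact mix.
rng = random.Random(7)
n = 20000
estimate = sum(eval_stack_stochastic(stack, rng) for _ in range(n)) / n
```

Stacking lobes with very different values (0.2 vs 1.0 here) raises the variance of each single sample, which is exactly the leftover noise mentioned above.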
The next step will be to implement the “Film” sub-system which is what allows temporal accumulation and renderpasses.
The Film sub-system is finally in.
We now have a more correct TAA in place for animation playback and viewport navigation, which also converges faster when the view becomes static. Large pixel filter (bigger than 2px) support is a bit less than ideal, but already a clear improvement in convergence time compared to the previous implementation.
There are many ways to improve the TAA (by doing disocclusion rejection, for example), but I decided to leave it as is for now, as it is already a good improvement over the past implementation.
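For readers unfamiliar with temporal accumulation, here is the core idea as a toy Python sketch (invented names, not the Film sub-system’s actual code): while the view is static, each new jittered sample is folded into a running mean so the image converges; when the view changes, the accumulation restarts:

```python
class TemporalAccumulator:
    """Toy per-pixel temporal accumulation (frames are lists of floats)."""

    def __init__(self):
        self.history = None
        self.count = 0

    def reset(self):
        # Called when the view changes (navigation, edits): start over.
        self.count = 0

    def add_sample(self, frame):
        self.count += 1
        if self.count == 1:
            self.history = frame
        else:
            # Incremental running mean: history converges to the
            # average of all samples seen since the last reset.
            w = 1.0 / self.count
            self.history = [(1 - w) * h + w * f
                            for h, f in zip(self.history, frame)]
        return self.history

# Four jittered samples of a two-pixel "image" converge to their mean.
acc = TemporalAccumulator()
for f in ([0.0, 1.0], [1.0, 0.0], [0.5, 0.5], [0.5, 0.5]):
    result = acc.add_sample(f)
```

Real TAA additionally reprojects the history through motion vectors and rejects stale history (e.g. on disocclusion), which is the improvement left for later in the post above.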
Many of the render passes are already supported. Adding support for additional ones has been simplified and has less overhead than before.
Cryptomatte is still waiting to be implemented. As I am not familiar with the current implementation, I prefer to focus on more pressing features.
I would like to note that the goal is to have at least feature parity with the current EEVEE implementation, to avoid delaying the release indefinitely. So I will be delaying some features or improvements that I consider non-essential for an initial release. The two features that are now delayed are viewport up-scaling and camera panoramic projection. The latter interacts with so many other features that it would take too much time to complete.
I am now focusing on bringing back the motion-blur and depth of field.
Hi, motion-blur and depth of field are now done.
For both, I took the time to port the implementation to compute shaders. This meant I had more freedom to leverage modern hardware capabilities, improving performance a bit in both cases. The core algorithms are still the same. This was also a good opportunity to double-check the code and implement a few missing bits.
Quality and ease of use have also been improved. I removed some parameters that existed only because of shortcomings of the old implementations.
Motion Blur has 2 new features:
- Shutter curve mapping is now supported. It only distributes the motion steps following the curve, so it will not be visible if the motion steps parameter is set too low.
- Motion-blur is supported in the viewport:
When navigating or editing, it only blurs towards the previous viewport state to smooth out the interaction.
When playing an animation, it will use the render settings and preview the motion blur by extrapolating the deltas from the last drawn frame data.
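One plausible way to distribute motion steps along a shutter curve is inverse-CDF sampling; the sketch below (invented names, not EEVEE’s actual code) shows the idea, including why the curve’s shape barely registers when the motion steps parameter is low - few steps means few quantiles probing the curve:

```python
def distribute_steps(curve, n_steps, resolution=1024):
    """curve: f(t) -> weight on [0, 1]. Returns n_steps times in [0, 1]
    placed so that their density follows the curve (inverse-CDF sampling)."""
    # Tabulate the cumulative integral of the curve, then normalize.
    cdf, acc = [], 0.0
    for i in range(resolution):
        acc += curve(i / (resolution - 1))
        cdf.append(acc)
    cdf = [c / acc for c in cdf]
    # For each step's quantile, find the first time where the CDF reaches it.
    times = []
    for s in range(n_steps):
        q = (s + 0.5) / n_steps
        t = next(i for i, c in enumerate(cdf) if c >= q) / (resolution - 1)
        times.append(t)
    return times

# A flat curve spreads steps evenly; a ramp clusters them near the end
# of the shutter interval, weighting the blur toward shutter close.
uniform = distribute_steps(lambda t: 1.0, 4)
peaked = distribute_steps(lambda t: t, 4)
```

With only a handful of steps, both distributions sample the interval too coarsely for the curve’s shape to show in the final blur, which matches the caveat in the list above.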
For the depth of field, input stabilization has been fully reworked and uses TAA internally. This means much less flickering on bokeh highlights. EDIT: I also had to disable jittered depth of field for the viewport, because it is too unstable and incompatible with the TAA that the viewport uses. The tool-tip should reflect that once EEVEE-Next replaces EEVEE.
I also started enabling all the properties panels that are EEVEE-Next compatible.
Next, I’ll tackle the light and shadow systems.