If I recall correctly, work on Lumen was done by 4 people and took 2 years until it was made public a year ago. I haven't stumbled across numbers for Nanite, but I guess they are similar or higher: a few people working on just that one feature for multiple years. And even those features are not done yet; those people are still working on them.
EEVEE has only one main developer, so a single feature like Lumen would take ~12 years, with no other Flareon features in the meantime.
And for the money part, the BF can't just hire multiple people for a single EEVEE feature while other areas like UV have ~zero.
Anyway, EEVEE has tons of potential features that I guess Clément is aware of, but it's just not possible to implement all of them due to lack of workforce.
Adapting the new codegen to the old EEVEE codebase was more work than I originally anticipated.
A core design shift was to make the codegen render-engine agnostic, meaning there should be no special behavior depending on the render engine that runs it. At first glance this does not look like a major issue for Blender, because EEVEE is the only engine to use the codegen, but it became important with the recent decision to keep EEVEE-Next and the original EEVEE implementation side by side. It also makes other engine implementations easier.
Moreover, this delegates geometry support to the engine itself, making support for other geometry types easier.
Technically, the new codegen now only produces function strings that are then used by the render engine however it wants. The strings are now shader-stage agnostic, meaning they can be used inside vertex shaders (e.g. true procedural displacement support for EEVEE-Next).
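To make this a bit more concrete, here is a minimal sketch of what "the engine uses the strings however it wants" could look like. All the type and function names (GeneratedMaterial, build_vertex_source, etc.) are illustrative assumptions of mine, not Blender's actual API.

```cpp
#include <string>

/* Hypothetical container for the stage-agnostic function strings that the
 * codegen produces for one material. */
struct GeneratedMaterial {
  std::string surface_function;       /* e.g. "Closure nodetree_surface() { ... }" */
  std::string displacement_function;  /* e.g. "vec3 nodetree_displacement() { ... }" */
};

/* The engine decides which shader stage each string ends up in. Pasting the
 * displacement function into the vertex stage is what makes true procedural
 * displacement possible. */
std::string build_vertex_source(const GeneratedMaterial &mat, const std::string &engine_lib)
{
  return engine_lib + mat.displacement_function + "\n/* engine vertex main() goes here */\n";
}

std::string build_fragment_source(const GeneratedMaterial &mat, const std::string &engine_lib)
{
  return engine_lib + mat.surface_function + "\n/* engine fragment main() goes here */\n";
}
```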
Porting to the old codebase was also a test for this and allowed me to polish the design, furthering the separation between the engine and the codegen.
I cleaned up some of the trickiest things we were doing to support displacement as bump mapping. The choice between fine bump and fast bump (the 2.80 blocky style) is now the responsibility of the engine and may become a performance option for EEVEE-Next. This is also important for upscaling, which I am aiming to support in EEVEE-Next.
The Shader-to-RGB node is also now supported and engine agnostic. The engine can choose whether or not to implement it. The only change in behavior for the original EEVEE implementation is that any shader using a Shader-to-RGB node will no longer have SSR or SSS on any of its BSDF nodes. This change mimics the expected behavior of EEVEE-Next. I am still trying to find a way to keep the old behavior, but it seems complicated.
When working on supporting the current SSS implementation, I stumbled across what I can only describe as a bad choice from my past self. Some of you might know that the SSS radius socket default values are used to pre-compute the SSS and that the socket input is only used as a scaling factor. Alas, the scaling factor makes no sense at all as a parameter. What should have been done from the beginning is to use the input as a mean radius. This makes more sense and is more compatible with what you would expect from Cycles. You would have to tweak the default values for the average SSS coloring, and then the input would just bring it closer to what Cycles outputs. I did not provide any version patching for now, but this can be done easily.
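As an illustration of the difference between the two interpretations, here is a rough sketch, assuming the old scaling input is read as a single factor and the new input as a target mean radius. Names and the exact math are my own simplification, not the actual implementation.

```cpp
#include <array>

using float3 = std::array<float, 3>;

/* Old interpretation: the SSS profile is pre-computed from the socket
 * *default* radius, and the runtime socket input only scales it. */
float3 old_sss_radius(const float3 &default_radius, float scale_input)
{
  return {default_radius[0] * scale_input,
          default_radius[1] * scale_input,
          default_radius[2] * scale_input};
}

/* New interpretation: the runtime input is a mean radius. The default values
 * still define the per-channel coloring, but the profile is rescaled so that
 * its average matches the input, closer to what Cycles outputs. */
float3 new_sss_radius(const float3 &default_radius, float mean_radius_input)
{
  const float mean = (default_radius[0] + default_radius[1] + default_radius[2]) / 3.0f;
  const float factor = (mean > 0.0f) ? mean_radius_input / mean : 0.0f;
  return {default_radius[0] * factor,
          default_radius[1] * factor,
          default_radius[2] * factor};
}
```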
Testing is highly encouraged. Any regressions not stated in the commit message should be reported to the bug tracker.
I will now focus on porting the first bits of EEVEE-Next.
Hi everyone, I would like to announce that a second milestone has been reached.
We now have material nodetree support inside EEVEE-Next. There is a placeholder lighting model to be able to check how the BSDFs are mixed. Note that only the forward shading pipeline is effectively implemented.
This was merged for the 3.2 release so we can have some user testing to see if any nodetrees break. (Edit: The experimental options are disabled in beta and release builds. Testing is to be done with the 3.3 alpha branch.)
Mesh & Curves (Hair) surface types should be supported. Vertex Displacement is also enabled by default and will become an option down the line.
Grease Pencil geometry is done but disabled for the time being until we have a per object option to select which renderer to use.
The Shader To RGBA handling was not straightforward but was effectively dealt with. It should now be supported.
Volume shaders are not yet supported.
BSDF shading makes use of stochastic sampling of the BSDFs, so the number of BSDFs should no longer make the shader linearly more expensive. Temporal accumulation is not yet implemented, so there can be noise left if you stack many BSDFs with very different properties.
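A hedged sketch of the kind of stochastic selection meant here: per sample, one BSDF is picked with probability proportional to its weight and only that one is evaluated, which is why the cost stops scaling linearly with the number of BSDFs, and why noise remains until temporal accumulation averages it out. The function below is my own illustration, not the EEVEE-Next code.

```cpp
#include <vector>

/* Pick one closure index with probability proportional to its weight.
 * `rand` is a uniform random number in [0, 1); assumes a non-empty list. */
int pick_closure(const std::vector<float> &weights, float rand)
{
  float total = 0.0f;
  for (float w : weights) {
    total += w;
  }
  float r = rand * total;
  for (int i = 0; i < int(weights.size()); i++) {
    if (r < weights[i]) {
      return i; /* Only this BSDF is evaluated for this sample. */
    }
    r -= weights[i];
  }
  return int(weights.size()) - 1; /* Guard against floating-point round-off. */
}
```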
The next step will be to implement the “Film” sub-system which is what allows temporal accumulation and renderpasses.
We now have a more correct TAA in place for animation playback and viewport navigation, which also converges faster once the view becomes static. Support for large pixel filters (bigger than 2px) is a bit less than ideal, but it is already a clear improvement in convergence time compared to the previous implementation.
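The convergence behavior can be pictured with a simple accumulation weight: a fixed feedback factor while the view is moving, and an equal-weight average once it is static. This is only a sketch under my own assumptions (the constant 0.1 is a made-up feedback weight), not the actual implementation.

```cpp
/* Blend factor for mixing the new sample into the history buffer:
 * history = mix(history, new_sample, factor). */
float accumulation_factor(bool view_is_static, int static_sample_count)
{
  if (view_is_static) {
    /* Equal-weight average: sample N contributes 1/N, so the image converges. */
    return 1.0f / float(static_sample_count + 1);
  }
  /* Exponential moving average while navigating or playing back. */
  return 0.1f; /* Hypothetical feedback weight. */
}
```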
There are many ways to improve the TAA (by doing dis-occlusion rejection, for example), but I decided to leave it as is for now, since it is already a good improvement over the past implementation.
Many of the render passes are already supported. Adding support for additional ones has been simplified and has less overhead than before.
Cryptomatte is still waiting to be implemented. As I am not familiar with the current implementation, I prefer to focus on more pressing features.
I would like to note that the goal is to have at least feature parity with the current EEVEE implementation, to avoid delaying the release indefinitely. So I will be delaying some features or improvements that I consider non-essential for an initial release. The two features that are now delayed are viewport up-scaling and camera panoramic projection. The latter has so much interaction with other features that it would take too much time to complete.
I am now focusing on bringing back the motion-blur and depth of field.
For both, I took the time to port the implementation to compute shaders. This meant more freedom to leverage modern hardware capabilities, improving performance a bit in both cases. The core algorithms are still the same. This was also a good opportunity to double-check the code and implement a few missing bits.
Quality and ease of use have also been improved. I removed some parameters that existed only because of shortcomings of the old implementations.
Motion Blur has 2 new features:
The shutter curve mapping is now supported. It only distributes the motion steps along the curve, so its effect will not be visible if the motion steps parameter is set too low (see the sketch after this list).
Motion-blur is supported in the viewport:
When navigating or editing, it only blurs towards the previous viewport state to smooth out the interaction.
When playing an animation, it will use the render settings and preview the motion blur by extrapolating the deltas from the last drawn frame data.
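A rough way to picture the shutter curve mapping mentioned above: treat the curve as a density over the shutter interval and place the motion steps where its cumulative sum reaches even fractions. This is my own illustration of the idea, not the actual code; it also shows why a low step count hides the curve shape.

```cpp
#include <vector>

/* Place `step_count` motion step times (normalized to [0, 1]) according to a
 * sampled shutter curve, by walking its cumulative distribution. */
std::vector<float> motion_step_times(const std::vector<float> &curve, int step_count)
{
  std::vector<float> times;
  if (curve.size() < 2 || step_count < 1) {
    return times;
  }
  /* Cumulative sum of the curve samples. */
  std::vector<float> cdf(curve.size());
  float total = 0.0f;
  for (size_t i = 0; i < curve.size(); i++) {
    total += curve[i];
    cdf[i] = total;
  }
  if (total <= 0.0f) {
    return times;
  }
  /* Each step lands where the cumulative sum reaches an even fraction.
   * With very few steps, the resulting times barely differ from a uniform
   * spacing, so the curve has little visible effect. */
  size_t j = 0;
  for (int s = 0; s < step_count; s++) {
    const float target = total * (float(s) + 0.5f) / float(step_count);
    while (j + 1 < cdf.size() && cdf[j] < target) {
      j++;
    }
    times.push_back(float(j) / float(curve.size() - 1));
  }
  return times;
}
```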
For the depth of field, input stabilization has been fully reworked and uses TAA internally. This means much less flickering on bokeh highlights. EDIT: I also had to disable jittered depth of field for the viewport, because it is too unstable and incompatible with the TAA that the viewport uses. The tool-tip should reflect that once EEVEE-Next replaces EEVEE.
I also started to enable all the properties panels that are EEVEE-Next compatible.
The new shadows (maps) have been merged!
Note that this isn’t the whole new shadow implementation. Other shadow related features will come later.
For instance, the current implementation lacks soft shadows.
The benefits of the new implementation:
Fixed shadow budget (user controlled)
High number of shadow maps (4096 visible instead of fewer than 1024 per scene) without per-light-type restriction (there can be 100 suns in the scene now)
Cached directional shadows (less costly when navigating)
Improved scalability (simplify option)
Optimized shadow map density (precision is put only where it is needed; see the sketch after this list)
Really high precision / sharp shadows (up to 8K shadow resolution per cube-face projection)
Fewer peter-panning artifacts caused by shadow bias
No more self-shadowing artifacts (but aliasing is still present)
Fewer quality settings to tweak (no more cascade controls, bias, or high bit-depth settings) because quality is adaptive
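To give an intuition for "precision where needed": a virtual shadow map can pick, per receiver, a level of detail so that one shadow texel roughly matches one screen pixel at that distance. The function below is entirely my own simplification (the parameter names and the 1:1 texel-to-pixel target are assumptions), not the actual heuristic.

```cpp
/* Pick a shadow LOD so that the shadow texel size at the receiver roughly
 * matches the on-screen pixel footprint at that distance. */
int shadow_lod(float receiver_distance,
               float pixel_world_size_at_1m, /* Footprint of one screen pixel at 1m. */
               float texel_world_size_lod0,  /* Shadow texel size at the finest LOD. */
               int max_lod)
{
  const float needed_texel_size = pixel_world_size_at_1m * receiver_distance;
  int lod = 0;
  float texel = texel_world_size_lod0;
  while (texel < needed_texel_size && lod < max_lod) {
    texel *= 2.0f; /* Each coarser LOD doubles the texel size. */
    lod++;
  }
  return lod;
}
```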
Downsides:
Needs a big shadow pool allocation upfront that cannot be shared between viewports.
The rendering of the shadow-maps is more expensive.
Sampling them is a bit more expensive.
There are 3 ways to hit memory limitations (a toy sketch of the tile pool follows this list):
If the shadow pool is bigger than what the GPU can support, performance can degrade drastically as the driver starts swapping the textures to CPU memory or even to disk. The application could also run out of memory and quit unexpectedly.
If there are many visible shadow-enabled lights in the current view, it is possible to run out of shadow mapping tiles. This results in missing shadow tiles, and an error message will be displayed in the viewport.
Too many shadowed lights might result in completely missing shadows on some lights.
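The tile limit can be pictured with a toy fixed-size pool, allocated upfront as described above. The names and behavior here are my own simplification, not Blender's implementation.

```cpp
/* Toy model of a fixed shadow tile pool. When it runs dry, the remaining
 * tiles stay un-rendered and the engine reports missing shadow tiles. */
struct ShadowTilePool {
  int capacity; /* Total tiles allocated upfront (the user-controlled budget). */
  int used = 0;

  bool request_tile()
  {
    if (used >= capacity) {
      return false; /* Out of tiles: this shadow region will be missing. */
    }
    used++;
    return true;
  }

  void reset() /* Called when re-distributing tiles, e.g. on view change. */
  {
    used = 0;
  }
};
```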
This implementation of virtual shadow mapping was not straightforward to make and required a lot of refactoring of the draw manager to be done efficiently. The implementation itself was rewritten twice, and the project grew in complexity. Along with that came other projects that needed my attention. Future updates will be more regular.
There are still discussions about whether or not we will add some more parameters:
Maximum resolution: Would allow adding an upper bound to the quality of sun shadows. This could be split between volume and opaque options, as volumes might request far too many shadow pages.
Normal Bias: The new shadows being much sharper than the previous implementation, the shadow terminator problem is much more noticeable. This bias would be a way to avoid the artifact (a generic sketch follows below). We cannot provide a per-object solution like Cycles does, so we have to make it either a per-light option or a global per-scene / render option. The latter is preferred for simplicity of use.
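For context, a normal bias of this kind usually means offsetting the shading point along the surface normal before the shadow map lookup, which hides the terminator artifact at the cost of slightly shifted contact shadows. The snippet is a generic sketch of that technique, not the exact parameter being discussed.

```cpp
#include <array>

using float3 = std::array<float, 3>;

/* Offset the shading position along the surface normal before sampling the
 * shadow map. `normal_bias` would be the per-light or per-scene parameter. */
float3 shadow_sample_position(const float3 &P, const float3 &N, float normal_bias)
{
  return {P[0] + N[0] * normal_bias,
          P[1] + N[1] * normal_bias,
          P[2] + N[2] * normal_bias};
}
```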
Edit: There seems to be a driver issue affecting AMD GPUs which makes shadows not work at all. This has been reported and is being worked on by AMD.