Blender 4.2 - EEVEE-Next Feedback

It is likely that the VRAM usage is caused by the shadow map pool and the raytracing buffers. The memory usage should scale the same way as before with the number of objects in your scene.

You can lower the size of the shadow map pool in Render Settings > Performance > Memory, but be aware that this might cause artifacts if you have a lot of high-resolution shadows.

You can also lower the memory usage of ray-tracing by reducing its quality in Render Settings > Raytracing > Resolution.
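For scripted setups, here is a minimal sketch of both tweaks via the Python API. The property names and enum identifiers below are my best guesses for the 4.2 builds, so verify them against the tooltips in your build:

```python
import bpy

eevee = bpy.context.scene.eevee

# Shrink the shadow map pool (Render Settings > Performance > Memory).
# Assumed enum identifier, in megabytes; too small a pool can cause
# artifacts with many high-resolution shadows.
eevee.shadow_pool_size = '128'

# Trace rays at reduced resolution (Render Settings > Raytracing > Resolution).
# Assumed identifier for the 1:4 setting.
eevee.ray_tracing_options.resolution_scale = '4'
```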

2 Likes

The raytracing pipeline is not really meant for high-performance applications, let alone VR. That’s exactly why the option to disable it exists.

The worst offenders are reflections. The reflection itself flickers (even without temporal reprojection) and it’s grainy.

That’s a tradeoff we had to make for correctness and GI. We are still looking into making them more temporally stable.

There is no denoising

I am not sure why this is the case. That sounds like a bug. The anti-aliasing should work fine if you use Temporal Reprojection.

No indirect/HDR lighting. Not even with light probes.

The reason for this might be that indirect light is clamped to 10 by default; see Render Settings > Clamping > Surface > Indirect Light. We chose this default to reduce the amount of fireflies in the ray-tracing pipeline, but if you are not using the ray-tracing pipeline, there is no reason to keep this limit.
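If you want to script this, here is a minimal sketch assuming the 4.2 property name for this clamp (check the tooltip of the Indirect Light slider to confirm):

```python
import bpy

# Assumed property name; 0.0 disables the clamp entirely (default is 10).
bpy.context.scene.eevee.clamp_surface_indirect = 0.0
```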

1 Like

So I tried the latest one and the light intensity matches Cycles pretty well, great.

However, I still can’t figure out why the light angle is manual. I mean, you must already be extracting some feature from the HDRI map to define the direction of the virtual extracted light; there must be something whose center point you determine to calculate the light direction. That something must therefore have some area, which could be used to derive the virtual extracted light’s angle (disk size), couldn’t it?

Thank you so much! It did lower the VRAM usage.

Yes, I have it. But the issue is that it produces unwanted results. For an HDRI with two opposing light sources, it creates one light in the middle with both light areas combined. In this case you almost always want to reduce the effective size of the extracted light, otherwise the specular looks very wrong. If the extracted sun is smaller, it is less distracting on shiny surfaces.

The other issue I ran into is that the areas need to be weighted by intensity, otherwise you make the sun bigger than it is. But even with that, I got very bad results with courtyard.exr and night.exr (two HDRIs we ship with Blender): the resulting light would either not match the shape of the light reflector (in courtyard.exr the wall is the main source of light and it is not a disk, so the result was very distracting), or the light sources are too scattered and get merged into a huge blob in the middle (night.exr). Unfortunately I didn’t save images of these results.
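To make that failure mode concrete, here is an illustrative sketch (not Blender’s actual implementation) of extracting a virtual sun from an equirectangular HDRI with intensity weighting. It also shows why two opposing bright regions collapse into one light “in the middle”: the weighted mean of opposite direction vectors is degenerate.

```python
import numpy as np

def extract_sun(hdri: np.ndarray):
    """hdri: float array of shape (H, W, 3), equirectangular layout."""
    h, w, _ = hdri.shape
    lum = hdri.mean(axis=2)

    # Direction on the unit sphere for each pixel center.
    theta = (np.arange(h) + 0.5) / h * np.pi        # polar angle
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi    # azimuth
    sin_t = np.sin(theta)[:, None]
    dirs = np.stack([
        sin_t * np.cos(phi)[None, :],
        sin_t * np.sin(phi)[None, :],
        np.broadcast_to(np.cos(theta)[:, None], (h, w)),
    ], axis=-1)

    # Solid angle of each pixel (rows near the poles cover less area).
    d_omega = sin_t * (np.pi / h) * (2.0 * np.pi / w)

    # Keep the brightest 1% of pixels, weighted by intensity * solid angle.
    mask = lum > np.percentile(lum, 99.0)
    weight = lum * d_omega * mask
    mean_dir = (dirs * weight[..., None]).sum(axis=(0, 1))
    mean_dir /= np.linalg.norm(mean_dir)  # degenerate for opposing sources!

    # Intensity-weighted solid angle of the region, treated as a disk:
    # a spherical cap of half-angle a covers 2*pi*(1 - cos(a)) steradians.
    omega = weight.sum() / max(lum[mask].mean(), 1e-8)
    half_angle = np.arccos(np.clip(1.0 - omega / (2.0 * np.pi), -1.0, 1.0))
    return mean_dir, 2.0 * half_angle  # direction, disk angle
```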

1 Like

I might be mistaken, but the Ambient Occlusion effect is now only part of the raytracing pipeline, right?
In legacy EEVEE, to run in VR I’d turn off screen-space reflections, and though noisy, the AO is there and it’s all relatively usable.
In EEVEE Next, since AO is not a separate option, I get either raytracing plus AO or nothing, right?
Could there be a more lightweight effects pipeline (AO at least) that runs at 30 fps or better for a common scene, and is temporally stable?

AO specifically exists as a render pass that you can enable and use in the compositor if you need basic AO for artistic effect.
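For reference, a minimal sketch of that workflow from Python: enable the AO pass on the view layer, then multiply it over the combined image in the compositor. The node type and socket names are standard bpy identifiers, but treat the setup as illustrative:

```python
import bpy

scene = bpy.context.scene
bpy.context.view_layer.use_pass_ambient_occlusion = True  # enable the AO pass

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new('CompositorNodeRLayers')
mix = tree.nodes.new('CompositorNodeMixRGB')
mix.blend_type = 'MULTIPLY'  # darken the image by the AO term
out = tree.nodes.new('CompositorNodeComposite')

tree.links.new(render.outputs['Image'], mix.inputs[1])
tree.links.new(render.outputs['AO'], mix.inputs[2])
tree.links.new(mix.outputs['Image'], out.inputs['Image'])
```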

So I guess you just want to wait for the compositor to support render passes in the viewport, which I assume is sadly still far away.

2 Likes

There’s the AO node for materials, and that one works… one just needs to add it to all the materials, no!?
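A sketch of that “add it to all the materials” idea as a script. The AO node already multiplies its Color input by the occlusion term, so it can be spliced in front of every Principled BSDF’s Base Color; node and socket names assume default setups:

```python
import bpy

for mat in bpy.data.materials:
    if not mat.use_nodes:
        continue
    tree = mat.node_tree
    bsdf = tree.nodes.get('Principled BSDF')
    if bsdf is None:
        continue  # skip materials without a default Principled BSDF
    ao = tree.nodes.new('ShaderNodeAmbientOcclusion')
    base = bsdf.inputs['Base Color']
    if base.is_linked:
        # Reroute whatever fed Base Color through the AO node.
        src = base.links[0].from_socket
        tree.links.new(src, ao.inputs['Color'])
    else:
        ao.inputs['Color'].default_value = base.default_value[:]
    tree.links.new(ao.outputs['Color'], base)
```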

1 Like

We might reintroduce the option, but that needs more refactoring and design, as it has to work with the new BSDF layering system, UI, etc. So that might be for a future release.

2 Likes

Yes, this seems to be my first takeaway as well. However, this reduces the quality to below that of standard EEVEE; see my first versus fourth photos.

Okay, that is good to know.

Thanks, yes, enabling temporal reprojection without raytracing works; the image is antialiased normally.

Increasing clamping above 10 has an effect, but a very faint one. What I meant is that my light probe (volume) does not seem to work without raytracing. It doesn’t do anything, even if I click “Bake Light Cache” in the Data Properties tab.
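One way to separate a UI problem from a baking problem is to trigger the bake from Python with the probe object active. The operator name below is my assumption from the 4.2 builds (check it via Blender’s menu search), so treat this as a sketch:

```python
import bpy

# Assumed operator name for the "Bake Light Cache" button; run with the
# Light Probe Volume as the active object.
bpy.ops.object.lightprobe_cache_bake(subset='ACTIVE')
```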

Also: there doesn’t seem to be any AO without raytracing. This really detracts from the real-time quality, to below that of legacy EEVEE.

I just tried it and it works fine here. Please file a bug report then. It might be that some versioning is going wrong, or that the scene setup triggers a bug in the Volume probe baking.

Thanks for reporting, we are aware of the situation and will find a way to expose this option back. But as I outlined in a previous reply, this is unlikely to land in 4.2.

I tried out the May 16 build and did a small test to compare the engines.
For lighting, it’s just an HDRI and an orange area lamp on the left, with a black plane on the bottom and a sphere probe around the character.

EEVEE Next seems to be ‘aggressive’ with its raytracing - that orange glow beneath the forearms isn’t there in Cycles, and the reflections from the emissive dots seem too bright.
The whole image, but especially the chest feathers, looks weirdly blurry as well. Zooming in inside Blender made it even blurrier.

Mind you, I’m a casual user compared to anyone here - just wanted to chip in my experience so far.

2 Likes

Right, I forgot the whole reason I wanted to post here.

This was the previous test:

Here, the character uses a more complicated and detailed material (with a few 8K textures), and EEVEE Next freaks out - it’s visibly noisy, as if it can’t figure out what it’s meant to display, and as a result the material doesn’t react to light properly.
Which is odd, considering the EEVEE Next project page mentions improvements to shaders as a whole.

1 Like

EEVEE Subsurface Scattering manages to blend different objects by washing out the colors at the edges. I’m not sure whether this is physically correct, but it is a very welcome result for stylized shading.

EEVEE Next without raytracing does a horrible job. With raytracing enabled it is much better, but still not without artifacts.

No comparison with Cycles, though that also won’t look good. The hope is to achieve with EEVEE Next what EEVEE Legacy can.

4 Likes

Hi. I was testing EEVEE Next with a scene that has multiple instances of planes with alpha. The planes have a shader that combines the Principled BSDF (with SSS activated) with a Translucent node (using a Mix node to maintain energy conservation). I also made a render in Cycles for reference. EEVEE Next is similar to the reference, a little darker and noisier, but it handles this very well. Amazing! You do a magnificent job.
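For anyone who wants to reproduce that setup, here is a minimal sketch of such a shader built through the Python API: a Principled BSDF (with subsurface enabled) and a Translucent BSDF blended through a Mix Shader, so the two lobes share energy rather than adding up. Input names assume the 4.x Principled BSDF, and the factor values are illustrative:

```python
import bpy

mat = bpy.data.materials.new("AlphaPlaneLeaf")  # hypothetical name
mat.use_nodes = True
tree = mat.node_tree
tree.nodes.clear()

principled = tree.nodes.new('ShaderNodeBsdfPrincipled')
principled.inputs['Subsurface Weight'].default_value = 0.3  # enable SSS

translucent = tree.nodes.new('ShaderNodeBsdfTranslucent')

# The Mix Shader interpolates between the two BSDFs, keeping total energy at 1.
mix = tree.nodes.new('ShaderNodeMixShader')
mix.inputs['Fac'].default_value = 0.4

out = tree.nodes.new('ShaderNodeOutputMaterial')
tree.links.new(principled.outputs['BSDF'], mix.inputs[1])
tree.links.new(translucent.outputs['BSDF'], mix.inputs[2])
tree.links.new(mix.outputs['Shader'], out.inputs['Surface'])
```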

Cycles reference:

Eevee Next (EEVEE):

EEVEE Legacy

Related to this, I had a question: why does the render seem to have a kind of filter that preserves the noise despite using high sample counts? In the test above I used 32 samples and then tried many more, but the image did not clean up. Increasing the samples is also a problem for me: as you can see from the render times in the previous image, EEVEE Next almost doubles the time of Legacy at the same sample count, and raising the samples further drives the render time up even more.

HQ files, since the images above were converted to JPG:
test.zip.txt (9.1 MB)

3 Likes

This is it. Starting a new, empty scene, light probes work, but not in my production scene. There it doesn’t even start rendering the probes, hence why I thought they simply don’t work.
I played around with the scene a bit. After deleting a lot of objects it starts rendering the probes, but it never finishes and instead crashes (consistently). I’ve narrowed it down to having large objects in the scene: with the light probe volume at a standard 2×2 m cube, a simple ground plane of, say, 200×200 m slows rendering to a crawl, and a size of 400×400 m makes Blender crash.

EDIT: filed a bug report here: https://projects.blender.org/blender/blender/issues/121916

1 Like

The thing is that this is already a problem with the position: it creates just one source in the middle. If the direction of the light source is already compromised, having a compromised area is not that big of a deal IMO. In fact, I’d argue that a correct position for the extracted light is even more important than a correct area. It’s weird if you have an HDRI of two opposing windows and the light source illuminates your scene from a shadowy corner of the room halfway between the two windows.

Maybe a better solution would be to not make the extracted light angle a constant, but a multiplier of the automatically calculated area. That way, if the automatic area calculation goes out of whack, the user can still correct it; but if I am browsing multiple HDRIs during lookdev, and most of them are the typical sunny day with one bright sun disk, I don’t have to constantly reach for a parameter to tweak.
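In pseudocode, the proposed parameterization would look something like this (names made up):

```python
def effective_sun_angle(auto_angle: float, user_multiplier: float = 1.0) -> float:
    """auto_angle: disk angle derived from the HDRI (intensity-weighted)."""
    return auto_angle * user_multiplier  # 1.0 = fully automatic
```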

2 Likes

Hi @fclem,

Your incredible coding skills and Blender dedication are greatly appreciated!

I only recently discovered that Volumes have been implemented, and that they currently only work in Viewport Shading > Rendered.

Is Depth of Field - ‘Jitter Camera’ planned for a future Blender version, after 4.2?

Thank you for simplifying the interface. Any thoughts on further simplifying the Denoising options (as previously mentioned)?

Kind regards

1 Like

Why call it a first release if the feature set is lacking in every department? Is the roadmap that important?

1 Like

I don’t think that’s either constructive or useful in any way.
I have purchased first releases of software, and every single one of them was incomplete considering the many features that were added later (the last one was Procreate Dreams). Software is like an organism in a way; the obvious analogy would be: why give birth if the human is lacking in every department?

3 Likes