Blender 4.2 - EEVEE-Next Feedback

Then I should have explicitly mentioned that Legacy had the same limitation, and so does any other realtime renderer, including the ones in game engines (other than letting you change the opacity of an entire light's shadow, but I'm guessing that's not what you want, since Legacy never had that either).
Next's shadows are equivalent to Legacy's alpha hashed mode, which uses noise to simulate transparency (the other blend modes are doable with node setups), though from my testing Next gives better results (as long as jittered shadows are on).
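To make the "noise to simulate transparency" idea concrete, here is a minimal sketch (plain Python, not Blender code) of how a hashed alpha test works: each sample treats the surface as opaque with probability equal to its alpha, so averaged over many jittered samples the shadow converges to the correct fractional transparency, at the cost of noise.

```python
import random

def alpha_hashed_visible(alpha, rng):
    """Stochastic alpha test: treat the surface as opaque for this
    sample with probability `alpha`. Averaged over many samples
    (e.g. jittered shadow rays), the result converges to the correct
    fractional transparency -- the residual error shows up as noise."""
    return rng() < alpha

# Averaged over many samples, roughly `alpha` of them register as opaque.
rng = random.Random(0)
hits = sum(alpha_hashed_visible(0.3, rng.random) for _ in range(100_000))
fraction = hits / 100_000  # close to 0.3
```

This is why jittered shadows help: each jittered sample draws a fresh threshold, so the noise averages out instead of producing a fixed dither pattern.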


I am getting color fringing on the emission object when DOF is on. Is this expected behavior?


It goes away if I enable Jitter Camera in the DOF settings and render the image.


Not sure if that’s expected. Please open a report so we can investigate.


Since Eevee NEXT is still a rasterizer, wouldn’t it be possible and efficient to render points of point cloud as camera facing quad sprites with a circle opacity mask and spherical normal transformed from tangent to world space?

This way the point clouds would be very cheap to render (just two triangles per point), would look like perfect spheres, and the sprites could be transformed to face the light for the shadow pass, so the shadows would look right as well. Game engine rasterizers do this and it both looks and performs great.
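The geometry for such a sprite is cheap to build. As a rough sketch (plain Python with illustrative names, not any actual EEVEE code): the quad's four corners are offsets along the camera's right/up axes, and the "perfect sphere" look comes from reconstructing a spherical normal per pixel from the sprite's UV, discarding pixels outside the unit circle.

```python
import math

def billboard_quad(center, radius, cam_right, cam_up):
    """Camera-facing quad (two triangles) for one point: four corners
    offset from the point's center along the camera's right and up
    axes. cam_right and cam_up are assumed to be unit vectors."""
    corners = []
    for u, v in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
        corners.append(tuple(
            c + radius * (u * r + v * up)
            for c, r, up in zip(center, cam_right, cam_up)))
    return corners

def sphere_normal(u, v):
    """Per-pixel normal as if the quad were a sphere: (u, v) in [-1, 1]
    across the sprite. Outside the unit circle the pixel is discarded
    (this is the circular opacity mask)."""
    d2 = u * u + v * v
    if d2 > 1.0:
        return None  # masked out by the circle mask
    # Tangent-space normal, +Z toward the camera; a shader would then
    # transform this from tangent to world space.
    return (u, v, math.sqrt(1.0 - d2))

quad = billboard_quad((0.0, 0.0, 0.0), 0.5, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

For the shadow pass, the same construction works with the light's right/up axes instead of the camera's, which is why the shadows can come out looking spherical too.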


This may be due to my lack of knowledge of EEVEE-Next. How exactly do you remove the blurriness from the eyes in the image below? I would like it to look the way it does in EEVEE Legacy. The plane over the iris has roughness at 0 and transmission at 1. It seems that the farther it is from the camera, the blurrier it gets. Am I missing anything? I am confused.

Thank you very much for such a prompt response :blush:, but I can’t achieve the desired effect :pensive:.

Youtube link

We need a parameter that removes AO (as in the previous version of the engine) and controls its intensity, but leaves GI (light and bounced reflections). With Max roughness, the entire scene is covered with spots, which are clearly visible in a light interior. The engine performs better under individual lamps, but everything related to HDRI performs worse than expected. Night and dark scenes look better; light and bright scenes look worse.


Try playing with the thickness parameters in “Fast GI”. The far thickness has a very high default to avoid energy loss, but it increases overshadowing. Try setting it to the minimum value.

  • For noise inside the AO/GI, increase the number of Fast GI rays.
  • For missing shadowing/occlusion (caused by lowering the thickness), increase the Fast GI steps.

See the manual for more detailed explanations.


This is a bug in the denoiser.

Please report so we can track it down.

However, I am not sure we will have enough time to fix it for 4.2.


I tried. Its behavior does not meet the needs of architectural visualization. Try making a simple room scene with light walls and natural light to understand the real problems in architectural visualization. Are there any plans to improve the engine toward more accurate handling of the joints between walls, ceilings, floors, and interior items?

I will send a video with examples of reference renders in Cycles compared to EEVEE. This is just so that you have an example of how things stand today; this is the latest Windows build, version 4.3.0.

Example Youtube link

In general, I want to express my gratitude for what you are doing. I think many artists will be grateful to you for many, many years after this project is completed :pray:.

It would be cool if, in the final version of Blender, the EEVEE settings had presets for architecture, animation, fast rendering, and very high-quality rendering!



EEVEE-Next is still fundamentally a rasterization-based render engine, and many of the techniques it uses are designed for real-time applications. As such, it still needs some manual work to avoid issues that come with these “fast rasterized techniques”. For example:

  • Interior scenes can look “flat” if you're using an HDRI for lighting. Baking light probes can help with that.
  • EEVEE-Next uses a lot of screen-space effects, so in some scenes there will be screen-space artifacts you might need to design around.

I’m sure there are plans to continue to improve EEVEE. I’m unsure if any are specifically targeted at interior rendering, but I’m fairly confident that interior rendering will improve as various other improvements to indirectly lit rendering are made.

Sorry, it's not clear to me. Can you explain what you want for your architectural visualization? Is it realistic renders? Nice-looking renders? A specific stylized look? Something else?

And what is lacking in EEVEE-Next to achieve this? Or what is performing poorly and making it hard to achieve what you want?


I think what she means (and correct me if I'm wrong) is that screen-space raytracing doesn't work for archviz. It's prone to a series of artifacts that break the illusion of realism, like progressive shadow darkening/brightening, illumination from sources outside the frame appearing/disappearing, and bounced light appearing/disappearing. It works better for stills, but for animated sequences it looks fake. It's a limitation of screen-space raytracing.


I would like to make a general plea for EEVEE-Next: can there please be some setting that makes the “rendered” view in the 3D viewport the same (as much as possible) as the actual render?

If the general plea isn’t clear: I notice that turning on camera jitter for DOF does nothing in the 3D viewport. But it does do camera jitter when I render the image.

I found a bug report where someone asked about this, and it was closed (WONTFIX?) because it is intentional that the rendered view differs from the final render (?!).

This breaks my workflow. Please let me see in the 3D viewport a preview of what the render will look like. Non-jittered and jittered are very different, and I compose camera angle/position differently between them.
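For anyone wondering why jittered and non-jittered DOF look so different: camera jitter renders each sample from a slightly different point on the lens aperture (the rays still converge at the focus distance), and the average produces physically plausible bokeh, whereas non-jittered DOF is closer to a post-process blur. A minimal sketch of how the per-sample aperture offsets could be drawn (plain Python, illustrative only, not EEVEE's actual sampler):

```python
import math
import random

def aperture_offsets(n, aperture_radius, rng=None):
    """Draw n uniformly distributed 2D offsets on the aperture disk.
    Each offset shifts the camera origin for one sample; averaging
    the resulting images produces depth-of-field with real parallax,
    which a single-view post-process blur cannot reproduce."""
    rng = rng or random.Random(0)
    offsets = []
    for _ in range(n):
        # Uniform disk sampling via polar coordinates:
        # sqrt on the radius keeps the density uniform over area.
        r = aperture_radius * math.sqrt(rng.random())
        theta = 2.0 * math.pi * rng.random()
        offsets.append((r * math.cos(theta), r * math.sin(theta)))
    return offsets

offsets = aperture_offsets(256, 0.05)
```

Because the viewport accumulates far fewer samples than a final render, a jittered preview converges slowly, which is presumably why the viewport and render currently differ.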


Yes, we are talking about flickering during camera movement or animation. I know these problems exist even in Unreal Engine, but there they are minimal, so it seems to me there is something to strive for; I think you understand this perfectly well yourselves. If there were a setting (a slider) to minimize the impact of AO, that alone would, to my mind, be enough for release (this applies to light interiors, which are so popular among architecture clients).

Unreal is specialized software, not for people who want ease of use and quick results.


Yes, that’s what I’m writing about here.

Quick question about EEVEE-Next updates: should we expect all feature improvements to land only in version 4.3, with only bug fixes for 4.2? Or will feature improvements eventually make their way to 4.2? Should we be testing EEVEE-Next with 4.3 now to give better feedback? I'm assuming the update cycle for 4.2 LTS won't be on the level of 4.3 Alpha/Beta.

Blender 4.2 LTS will only receive bug fixes. Any new feature will land in Blender 4.3 or a future version, depending on when the feature is finished. At this moment there is no difference between 4.2 and 4.3, as we are all in bug-fixing mode.

For new features, it would be better to provide feedback already during development. Someone with some knowledge of the development process can download a build and provide feedback even before the feature lands.

Why do I say “knowledge of the development process”? Features don't appear immediately; the engineering takes time. Often the initial implementation doesn't compare to the final product, but it is required to move from the current code base to the final feature. A good understanding of this allows constructive feedback and reduces the time developers are distracted and need to explain it.

I also believe that such a person would be great to have as part of the Viewport/EEVEE module, helping out on a regular basis, and that there are good candidates out there who could help. A challenge might be that he/she needs to know/understand/learn the multiple ways Blender is used, and that there are multiple options to evaluate.

People can always reach out to the module if they want to be involved more directly via blender chat.


Hi, I don't know if this is a bug, a known issue, or expected behavior, but the Thickness input in the Material Output node defaults to 0; however, if I input 0 using a Value node, I get different results.
Should I report this?

If no value is plugged into the output node, a default thickness based on the smallest dimension of the object is computed. If a value is connected, it will be used as object-space thickness (i.e., scaled by the object transform). A value of zero will disable the thickness approximation and treat the object as having only one interface.
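So an unconnected socket and an explicit 0.0 genuinely mean different things. A tiny sketch of the rule as described above (the function and parameter names here are illustrative, not Blender API):

```python
def effective_thickness(plugged_value, object_dimensions, object_scale=1.0):
    """Sketch of the thickness rule described above:
    - socket unconnected -> default based on the object's smallest dimension
    - a connected value   -> object-space thickness, scaled by the transform
    - an explicit zero    -> thickness approximation disabled (one interface)"""
    if plugged_value is None:             # nothing plugged into the socket
        return min(object_dimensions)     # smallest bounding-box dimension
    if plugged_value == 0.0:              # explicit zero from a Value node
        return 0.0                        # approximation disabled
    return plugged_value * object_scale   # object-space, scaled by transform

# Unconnected socket vs. explicit 0.0 give different results:
default = effective_thickness(None, (2.0, 1.0, 3.0))  # smallest dimension
zero = effective_thickness(0.0, (2.0, 1.0, 3.0))      # disabled
```

That difference is why plugging a Value node set to 0 does not reproduce the unconnected default: it is expected behavior, not a bug.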