Cycles feedback

@silex
AFAIK it is scene dependent. I can share the scene with you privately if you want, and you can repeat the test.

Also, my OS was Linux. What do you have, Linux or Windows?

So far two CPUs have been confirmed: the TR 2990WX and the Ryzen 5950X. This second one is the same as yours or a more modern version, right?

I’m on Linux, with a 3970X; that’s a Zen 2 Threadripper with 32 cores. I can test the scene for you, no problem. Did you try to simplify it?

Yes, it’s a simple one; it’s just that it uses commercial assets that I can’t share publicly. I can send it to you privately, just don’t share it hehe

EDIT: you have a PM with the link to download the file. Please post the results; I’m very curious to see whether your threads are all at 100% or whether they suffer like mine.

Hi, I just came across this paper about rendering massive scenes. I’m not sure how well known it is in the rendering community, but it looks really impressive, and it was certainly done with an older version of Cycles. It would be amazing to see this implemented in Cycles X.
https://dl.acm.org/doi/pdf/10.1145/3447807

Well, if memory serves me well, Milan Jaros is already a Cycles developer; he was involved in the early implementation of adaptive sampling.

Nice, I didn’t know that. I wonder whether this paper is being implemented somehow, that is, being able to split the render across the memory of multiple cards.

I have a suggestion to improve texture detail in denoised images.
The denoising albedo pass shows objects that are visible in reflections or refractions, which is important for properly denoising those objects. However, once the roughness value for the material goes over ~0.27, it abruptly switches to a simple white overlay with some Fresnel falloff. And that overlay can completely cover texture details if the surface is perpendicular to the camera.
In this example you can clearly see how the floor texture gets turned into a pixel soup, especially towards the back wall.

Which is not surprising if you look at the albedo pass:

But if I plug Blender’s DiffCol pass into the denoising node, the texture looks much sharper:

Of course, the drawback of using the DiffCol pass is that it doesn’t have any reflection or refraction information.
So maybe the strength of that white overlay in the albedo pass could be reduced somehow to better preserve texture details? I don’t think OIDN requires the albedo pass to look exactly like it does now, because you can give it a completely different pass and get even better results.
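
(For context: OIDN doesn’t care where its albedo guide comes from; it simply uses whatever image is bound to that slot. A minimal sketch with the OIDN C++ API; the resolution and buffer contents are placeholders, and this is not how Blender wires it up internally:)

    #include <OpenImageDenoise/oidn.hpp>
    #include <vector>

    int main() {
      const int width = 1920, height = 1080;
      // Placeholder buffers; in practice these hold the render passes.
      std::vector<float> color(width * height * 3), albedo(width * height * 3),
                         normal(width * height * 3), output(width * height * 3);

      oidn::DeviceRef device = oidn::newDevice();
      device.commit();

      oidn::FilterRef filter = device.newFilter("RT");
      filter.setImage("color", color.data(), oidn::Format::Float3, width, height);
      // Any guide image can be bound here, e.g. the DiffCol pass
      // instead of the built-in denoising albedo.
      filter.setImage("albedo", albedo.data(), oidn::Format::Float3, width, height);
      filter.setImage("normal", normal.data(), oidn::Format::Float3, width, height);
      filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
      filter.set("hdr", true);
      filter.commit();
      filter.execute();
      return 0;
    }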

I should mention that the tests were done at 64 spp, which is quite low, but this is a simple scene. Also, good denoising is most important with noisy images.

This was introduced because, if you allow “proper” denoising data to be generated for rough surfaces, you end up with a lot of noise in the denoising passes. Apparently this was bad for NLM denoising, so roughness 0.274 was picked as the cut-off above which proper denoising data is no longer generated; 0.274 was simply a “good value” found through empirical testing.

With OIDN supporting pre-filtering, the cut-off of roughness 0.274 could either be increased or removed entirely, as pre-filtering will remove a lot of that noise. But proper investigations will have to be made, and that decision will be left to the Cycles development team.
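
(For reference, pre-filtering in OIDN means denoising the guide images themselves before the main filter runs, then telling the main filter the guides are clean via the cleanAux flag. A rough sketch with the OIDN 1.4+ C++ API, assuming a device and a main filter set up as in the earlier sketch; again, this is not how Cycles wires it up internally:)

    // Denoise the albedo guide in place with its own "RT" filter.
    oidn::FilterRef albedoFilter = device.newFilter("RT");
    albedoFilter.setImage("albedo", albedo.data(), oidn::Format::Float3, width, height);
    albedoFilter.setImage("output", albedo.data(), oidn::Format::Float3, width, height);
    albedoFilter.commit();
    albedoFilter.execute();

    // Tell the main filter that its auxiliary guides are already clean.
    filter.set("cleanAux", true);
    filter.commit();
    filter.execute();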

If you compile Blender yourself and want to know which file to modify to change the cut-off, check here: /intern/cycles/kernel/film/passes.h. The relevant section is:

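    /* Rough (nonspecular) closures contribute their albedo to the diffuse
       accumulator and count toward the nonspecular weight; smooth closures
       are treated as specular. */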
    if (bsdf_get_specular_roughness_squared(sc) > sqr(0.075f)) {
      diffuse_albedo += closure_albedo;
      sum_nonspecular_weight += sc->sample_weight;
    }
    else {
      specular_albedo += closure_albedo;
    }

sqr(0.075f) is the value that defines the cut-off. But there’s some extra math between this point in the code and the number you type into the roughness field of the shader, which means the number seen in the code (0.075) doesn’t line up with the roughness value where the cut-off happens (0.274).
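
(If I’m reading the kernel right, the extra math works out like this: assuming the closure stores the squared UI roughness, the test compares the fourth power of the UI roughness against 0.075 squared, so the UI cut-off is the square root of 0.075. A quick sanity check; the variable names are mine:)

    #include <math.h>
    #include <stdio.h>

    /* The kernel test is roughly ui^4 > 0.075^2 (the closure roughness is
       assumed to be ui^2, and the kernel compares its square), which
       reduces to ui > sqrt(0.075). */
    int main(void) {
      printf("UI roughness cut-off: %.4f\n", sqrtf(0.075f)); /* ~0.2739 */
      return 0;
    }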

Are there known issues with shadow catcher(s) behind transparent objects or transparent objects as shadow catchers? I’ve been getting some crashes, but properly creating a bug report would require me to try to reproduce them in a new scene.

Interesting. In that case it’s definitely worth reevaluating the guiding passes, since NLM is no longer around. Tracing reflections above roughness 0.3 is probably still not useful, as the reflection becomes too blurry to contribute any significant detail to the appearance of the shader and the diffuse texture becomes more important; it’s just that this white overlay is too strong. Also, a sharp cut-off can create unwanted patterns if you have a roughness texture with values around this 0.274 boundary. The issue is most apparent in viewport rendering with the OptiX denoiser.

Here the white splotches in the albedo pass come entirely from the roughness map (areas where values go above 0.274), as there is no diffuse texture. But the denoiser can’t make that distinction and creates unwanted detail and noise on the surface. There’s also some angle-based cut-off, so there’s a lot going on that can confuse AI denoisers. Maybe gradually fading out the raytraced reflection instead of using an instant cut-off could mitigate the problem somewhat; see the sketch below.
As you said, proper testing should be done with various scenes. Unfortunately, building Blender and editing C code is way above my pay grade.
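
(A minimal sketch of what such a fade could look like in the kernel snippet quoted above, with a hypothetical blend replacing the hard branch; the band edges 0.05 and 0.10 are illustrative, not tested values:)

    /* Hypothetical: blend the closure albedo between the diffuse and
       specular accumulators over a roughness band instead of branching
       on a hard cut-off. rough2 stands in for
       bsdf_get_specular_roughness_squared(sc). */
    const float lo = sqr(0.05f), hi = sqr(0.10f);
    float t = clamp((rough2 - lo) / (hi - lo), 0.0f, 1.0f);
    t = t * t * (3.0f - 2.0f * t); /* smoothstep */
    diffuse_albedo += closure_albedo * t;
    specular_albedo += closure_albedo * (1.0f - t);
    sum_nonspecular_weight += sc->sample_weight * t;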

A well-organized set of documentation for building Blender can be found here: Building Blender - Blender Developer Wiki

And I believe the change you need to make to remove the cut-off entirely is to remove

if (bsdf_get_specular_roughness_squared(sc) > sqr(0.075f)) {
  diffuse_albedo += closure_albedo;
  sum_nonspecular_weight += sc->sample_weight;
}

from /intern/cycles/kernel/film/passes.h, along with the else keyword of the block that follows it, so that specular_albedo += closure_albedo; always runs (otherwise the dangling else won’t compile).
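
(For clarity, after that removal the quoted section reduces to the single line below. One caveat: if I recall correctly, diffuse closures also report a high roughness from bsdf_get_specular_roughness_squared and so previously always took the first branch; after the removal they would be routed to the specular accumulator too, so a real patch would probably still want a closure-type check.)

    /* With the branch removed, every closure contributes to the specular
       albedo, i.e. "proper" denoising data is generated regardless of
       roughness. */
    specular_albedo += closure_albedo;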

A similar report was made in the past, but the reporter didn’t provide much useful information. You can report the bug and we can look into it.

If you can’t figure out how to recreate the bug in a new scene, then take the current file you have, remove as much from it as possible so you’re left with only the objects, render settings, and materials required to reproduce the crash, and share that in your bug report along with steps on how to reproduce the crash in the given file. (In your case it sounds like the steps are to open the file and render the scene, so share those steps.)

Thanks, I’ve created a report (T92868). I just had to set aside some time to create a simple scene, because the one I was working on was too large and I had to remove the transparent objects to get it to render. I realized the bug actually involves both of the things I mentioned.

Agreed, it’s very slow.

With command-line rendering it is much worse. To stop a render you have to press the “Cancel Render” key combination and then wait up to 30 seconds for the render to cancel. You can force the render to stop by forcing Blender to close, but with the Placeholder option enabled when rendering an animation, this leaves behind a blank frame.

I have not read this paper but I will certainly take a look. We are currently trying to improve the memory footprint to help render large scenes.

Is this something that should be reported as a bug, if the hard-coded values are not correct for most cases? It might be useful to have this as an advanced setting, using the current value as the default. Feel free to create a bug report for this if that is the case.

There is a bug report for something similar to this described here: ⚓ T85512 Denoise artifacts in Albedo and Normal pass caused by material roughness

I’m currently in the process of rendering a bigger project (first time using the new Cycles in production; the new shadow catcher is a life saver here), and I noticed that while the performance is great and usually much better than before, there is something I don’t quite get:

Why is “hybrid” rendering (CPU + GPU, or XPU as some marketing folks at Pixar / SideFX call it :wink: ) slower than pure OptiX rendering, even with an extremely powerful CPU like a 32-core Threadripper? Is this being investigated, or is the consensus “just use the GPU and that’s it”?
It’s kind of frustrating to see a CPU sit there doing nothing while the GPU does all the heavy lifting…

In my case it looks like this.
GPU only:
[screenshot: render time and memory statistics]

GPU+CPU:
[screenshot: render time and memory statistics]

Same file, same frame, same computer. What’s quite strange is the memory consumption, which is a lot lower with CPU+GPU than with GPU only.

Don’t get me wrong, I don’t want to “complain” at all; I’d only like to know whether there is something planned to improve this situation. I didn’t find anything specific about it on https://developer.blender.org/

Is anyone getting this error with OptiX denoising on Linux, Blender 3.1.0, and NVIDIA driver 495.44?
https://developer.blender.org/T92985