I’m looking into this; we may add it back as a pass independent of denoising.
If you’re on Windows, the Blender install folder contains a blender_debug_gpu.cmd that you can run to generate debug logs. Generally, a .blend file that reproduces the problem is more important than crash logs; if we still can’t reproduce it with that, a crash log can help.
Great!! Today I was using 2.93 at work and ran into a situation where Random Walk just didn’t work but Christensen-Burley handled it easily, and I wished we could have it back in 3.0! Thanks Brecht!
Thanks for taking the time to reply, Brecht. It’s awesome that you are looking into it! Fingers crossed to see it back in; independent from denoising seems even more fitting for such a useful utility pass. Thank you for all your incredible work!
No clamping can produce more noise in general, not just fireflies. But the default value could certainly be increased to work better with the sky texture. How much clamping will darken the image depends on exposure and lights, but a value between 30 and 50 should give better results out of the box.
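For anyone who wants to experiment with this, a minimal sketch from Blender’s Python console (sample_clamp_indirect is the existing Cycles property; the value 50 is just an example from the range above, not a decided default):

```python
import bpy

# Raise the indirect light clamp for the current scene.
# A value of 0.0 disables clamping entirely (brightest result, most fireflies);
# higher values keep more indirect energy but allow more noise.
bpy.context.scene.cycles.sample_clamp_indirect = 50.0
```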
Hey, I have a bug where every time I render with adaptive sampling, Blender crashes at a random position, not when a new tile starts. https://developer.blender.org/T92158
I have done some tests with Christensen-Burley in 3.0 and unfortunately it does not work: it produces so much noise in 3.0 that even with 10 times higher sample rates you still have a lot of noise.
@Brecht:
Are there any ideas or plans to incorporate the capabilities/features of Christensen-Burley into a variant of Random Walk (Burley)?
Is there perhaps a bug in Blender’s version of Random Walk (Fixed Radius) that prevents proper light scattering and refraction? I appreciate your efforts to reintegrate Christensen-Burley in 3.0, and you have already pointed out possible problems due to the different way Cycles X works, but Blender needs the ability to correctly render translucent and transparent materials that are backlit (light refraction, light scattering, etc.).
If Christensen-Burley cannot be integrated into Cycles X, then SSS in Blender works only with significant limitations and urgently needs an alternative.
Maybe you should think about keeping the old Cycles in 3.0 alongside Cycles X.
I think that if you release 3.0 in December or January, a lot of users will complain about the missing SSS capabilities in Blender, because that is a major regression.
Hi.
The difference between regular noise and fireflies is that regular noise can be reduced with a relatively modest increase in render samples, which is not always the case with fireflies.
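As a rough illustration (a minimal Python sketch assuming idealized Monte Carlo convergence, not anything from Cycles itself):

```python
import math

# For well-behaved pixels, Monte Carlo noise falls off as 1/sqrt(N),
# so quadrupling the samples roughly halves regular noise.
for n in (64, 256, 1024):
    print(f"{n} samples -> relative noise ~ {1.0 / math.sqrt(n):.3f}")

# A firefly comes from a rare, extremely bright outlier sample; its variance
# is dominated by a few huge values, so it shrinks far more slowly in practice.
```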
Clamp Indirect = 30 still produces some noticeable differences in lighting power compared to no clamping, for example in the car’s headlights in the BMW scene. I think Clamp Indirect = 50 would be fine.
I did some SSS testing with Christensen-Burley on an object that looks really bad with Random Walk. While it’s true that Christensen-Burley needs more samples in 3.0, everything also renders much faster in Cycles X.
Here’s the 2.93 render at 256 spp:
So overall a bit of an improvement. Of course the issue is that the rest of the scene would have to be rendered with more samples as well, which could be a problem. For that, adaptive sampling should help, but I ran into the same problem that I reported before, where dark areas are undersampled. In this case it’s even causing denoising artifacts in the final render (dark spots in the bottom right).
I hope the devs calculate the noise threshold on the sRGB data and not on the linear pixel values, because an even noise distribution in linear space turns into undersampled blacks in sRGB. That means you need a gamma-distorted noise threshold to get a clean-looking result after the linear-to-sRGB transform.
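A minimal Python sketch of why this happens (the sRGB transfer function is the standard one; the ±0.005 noise amplitude is just an example value):

```python
def linear_to_srgb(c):
    # Standard sRGB transfer function for a single channel.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# The same +/-0.005 of linear noise is a much larger visual step near black
# than near mid-gray once the transform is applied:
for base in (0.01, 0.5):
    step = linear_to_srgb(base + 0.005) - linear_to_srgb(base - 0.005)
    print(f"linear {base}: sRGB step = {step:.4f}")
```

The step near black comes out roughly ten times larger than near mid-gray, which is exactly why evenly distributed linear noise looks like undersampled blacks.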
It means that the error is measured as

E = (|dR| + |dG| + |dB|) / sqrt(R + G + B)

Here RGB is linear, so only an approximate eye-sensitivity curve (the square root) is used.
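For clarity, a minimal Python sketch of that metric (a paraphrase of the formula above, not the actual Cycles kernel code; the eps guard against division by zero on pure-black pixels is my own addition):

```python
def adaptive_error(rgb, d_rgb, eps=1e-8):
    # E = (|dR| + |dG| + |dB|) / sqrt(R + G + B), with RGB in linear space.
    r, g, b = rgb
    dr, dg, db = d_rgb
    return (abs(dr) + abs(dg) + abs(db)) / max(r + g + b, eps) ** 0.5
```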
I think it’s possible to take the view transform and exposure into account, but it’s the compositor that makes it hard and impractical. I’m not a dev, though.
I don’t think that’s the way to go. By doing so you are assuming what kind of transform will be used downstream, and this kind of assumption should not be hardcoded in the rendering algorithms, I believe.
There are no assumptions. What you see on screen in Blender is more or less what you want your final result to look like, at least in terms of overall value range. I can’t think of a situation where you would work in sRGB in Blender but switch to linear gamma in post.