Would a variable noise threshold based on material colour further speed up adaptive sampling?

Would adaptive sampling be faster if the noise threshold it uses were based on perceptual noise? For example, I just rendered a dark blue and white tent. Inside the tent, where lighting is mostly indirect, the dark blue areas, although most likely just as noisy as the white areas, looked perfectly acceptable, whereas the white areas still looked very noisy.

Maybe it would be beneficial to have adaptive sampling bear this in mind, with a variable noise threshold based on material colour? Maybe even have it work on a per-pass basis rather than on the combined…sort of like an automatic branched path tracing, throwing samples only at the passes that need them.

We can investigate ways to better detect what is noise and what isn’t, but material color is not the right measure.

Great news, thanks. I thought there might be a correlation, given that noise seems perceptually stronger on white materials than on non-white materials under identical lighting conditions.

When you discuss these things with fellow developers, is it in a public forum? I’d be really interested to follow along if possible.

Would it be possible to factor in the denoising albedo pass at least (when it comes to telling the adaptive sampler when to stop sampling)?

By now, it seems to be a decent estimate of the final brightness values of the material after all of the shading nodes are accounted for (assuming the lighting is completely flat with no shadows). The only possible sticking point is that the algorithm for deciding whether a part of a surface is specular is binary with no blending, but that might be fixable.

The adaptive sampler itself meanwhile is really a nice rendering boost and avoids the pitfalls seen with previous attempts at a patch.

The material color is not a good estimate of the final color, and brighter colors do not need more samples in general. If you add such assumptions it may fix one case but will break another.

Basically all Cycles developer discussion happens in the developer.blender.org Cycles project and blender.chat in #cycles-coders.


I don’t expect that to do much, if anything at all. Those passes are useful for the denoiser because the denoiser looks at finished pixels and their neighborhood. The adaptive sampler on the other hand has deeper knowledge and looks at the variance of the samples that make up a single pixel.

The adaptive sampler not only helps with noise but also anti-aliasing. If you have sharp features in a texture, the adaptive sampler will pick up the edges and add more samples to resolve them, while leaving the flat areas with fewer samples. If it were to divide out the albedo beforehand, it would be unable to do that.
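To illustrate the distinction being drawn here, the per-pixel variance test could be sketched roughly like this. All names and thresholds are hypothetical, and this is a simplified single-pixel model, not Cycles' actual (tile-based, C++) implementation:

```python
def adaptive_sample_pixel(sample_fn, min_samples=16, max_samples=512,
                          noise_threshold=0.01):
    """Keep taking samples for one pixel until the estimated error of
    its running mean falls below the threshold (Welford's algorithm).
    Flat areas stop early; edges and noisy areas keep sampling."""
    mean, m2, n = 0.0, 0.0, 0
    while n < max_samples:
        x = sample_fn()  # one path-traced sample for this pixel
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_samples:
            # standard error of the mean as the stopping criterion
            std_error = (m2 / (n - 1) / n) ** 0.5
            if std_error < noise_threshold:
                break
    return mean, n
```

A pixel in a flat, evenly lit region returns nearly identical samples and stops at `min_samples`, while a pixel straddling a sharp texture edge or lit indirectly keeps accumulating samples, which is what gives the adaptive sampler its anti-aliasing behaviour.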


Does the adaptive sampler look at the variance of samples per pass, or just at the combined pass?

Only the combined pass for now. In an ideal world, we’d be able to allow the adaptive threshold to be driven by an entirely user defined input, since we can’t anticipate every possible use case or what people tweak in the compositor.


Cool, thanks. Do you think it would be beneficial to look at the variance of samples per pass, and then concentrate samples primarily on just the noisiest ray types until the noise levels even out? I'm thinking this would act like a dynamic, noise-driven branched path tracer for even more efficient sampling.

Seems like a really interesting approach. Some types of materials resolve very quickly, while others (such as SSS) could take orders of magnitude more samples to reach the same variance. I see this being a great (albeit considerably more complex) addition to adaptive sampling.

Really I think the first thing to do is do more test renders, see where it works poorly and try to improve the noise estimation. Only if that is done should more complexity be considered.

As I understand it, the suggestion is to concentrate rays on specific components of one material. Not to better balance different materials, because that should already be happening.

That may be the case. If you have a material which is a 50:50 mix of SSS and diffuse and you throw 100 samples at it, you'll end up with 50 of those samples going to the diffuse BSDF (if I understand things correctly), which is likely to converge very quickly. If instead you could weight the samples for each BSDF, you'd end up with much more even variance across different types of materials (though this seems technically quite complex and might not really fit into adaptive sampling; it would be more of an automatic branched path tracing).

The suggestion is two part:

a) a variable noise threshold based on perceptual noise. Generally, noise is more noticeable on materials with a bright base colour than on an identical material with a dark base colour under the same low lighting conditions, even though I suspect the renderer sees them as having the same noise level. I haven't fully tested the theory, but I suspect this should be limited to certain ray types, namely diffuse direct/indirect.

b) concentrating samples on the noisiest passes, i.e. if the indirect diffuse pass has a noise level of 0.8 and the direct pass a noise level of 0.2, then the direct pass would get 20% of the samples and the indirect pass 80%. An automatic branched path tracing technique, so only the squeaky cog gets the oil.
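To make the two-part suggestion concrete, here is a rough sketch of both ideas. The function names, the luminance scaling, and the allocation scheme are entirely hypothetical illustrations, not a proposal for an actual implementation:

```python
def perceptual_threshold(base_threshold, luminance, floor=0.05):
    """(a) Loosen the noise threshold for darker pixels, on the
    assumption that the same absolute noise is less visible on dark
    materials (illustrative scaling only; clamped to avoid division
    by zero on black pixels)."""
    return base_threshold / max(luminance, floor)

def allocate_pass_samples(pass_noise, total_samples):
    """(b) Split the sample budget across passes in proportion to
    each pass's estimated noise level."""
    total = sum(pass_noise.values())
    return {name: round(total_samples * noise / total)
            for name, noise in pass_noise.items()}

# The example from (b): indirect diffuse at 0.8, direct at 0.2
budget = allocate_pass_samples(
    {"diffuse_indirect": 0.8, "diffuse_direct": 0.2}, 100)
# → {'diffuse_indirect': 80, 'diffuse_direct': 20}
```

Under this scheme a dark pixel gets a looser threshold than a bright one, and the noisy indirect pass receives four times the samples of the clean direct pass.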

In practice the BSSRDF is already receiving more samples and certainly using more of the render time than a BSDF. There can be some advantage to this, just saying the speedup might not be as significant as you might expect compared to the additional overhead/complexity.


Okay, certainly the complexity overhead would need to be considered. I wasn't aware that in regular path tracing more samples already go towards BSSRDFs, which inherently take longer to resolve. If that's already the case, then I agree there would be little benefit to this.

As explained above, I think it's worth testing our current noise estimate empirically and seeing how well it works on real-world scenes. Maybe that reveals that brighter materials always need more samples, but it's important to have actual data to back up such decisions.
