This entire evaluation of noise should be moved to after the view transform; otherwise you are hardcoding the assumption that a certain open-domain value (0 to infinity, pre-view-transform) is “bright” or “overexposed”. Think of how problematic that would be if, say, we support HDR displays in Blender’s OCIO config in the future (where the light sabers are allowed to stay colorful instead of attenuating to solid white). Basically, what you assume to be “nearly clipped” will be shown clear as day in the image formed for an HDR display.
Ideally anything noise-related, including noise threshold evaluation, denoising, etc., should be moved to after the view transform, because the evaluation of “what is noise” is completely determined by the view transform (i.e. the image formation). I have come to the conclusion that attempting to measure the noise level pre-view-transform will be extremely ineffective.
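To make this concrete, here is a minimal NumPy sketch (using toy stand-in curves, not Blender’s actual OCIO view transforms) of how the same open-domain noise can be invisible under one image formation and plainly visible under another:

```python
import numpy as np

rng = np.random.default_rng(0)

# A bright open-domain patch (mean 4.0) with Monte Carlo noise on top.
patch = 4.0 + rng.normal(0.0, 0.5, size=(64, 64))

def sdr_clamp(x):
    # Toy SDR-style transform: everything above 1.0 clips to solid white,
    # so the noise in this patch vanishes after image formation.
    return np.clip(x, 0.0, 1.0)

def hdr_curve(x):
    # Toy HDR-style transform with headroom: bright values are compressed,
    # not clipped, so the noise survives image formation.
    return x / (1.0 + 0.25 * x)

for name, view in [("SDR clamp", sdr_clamp), ("HDR curve", hdr_curve)]:
    print(name, "-> display-referred noise std:", float(view(patch).std()))
```

The open-domain buffer is identical in both cases; only the image formation decides whether the noise exists for the viewer.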
I mean, just look at ACES vs. TCAMv2 and see how the same open-domain data output by Cycles can have a different noise level depending on which view transform you use:
ACES:
I don’t mean to talk about the color shifts, but look at the noise that appears in the purple region.
TCAMv2:
Look at how different the noise level is.
“What is noise” is influenced greatly not only by the dynamic range of the view transform, but also by how the view transform attenuates the chroma. Therefore, the evaluation of noise should happen after the view transform (i.e. after the image formation).
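As a rough illustration of the chroma point (again toy math, not the actual ACES or TCAMv2 curves), a transform that blends bright colors toward gray also damps the chromatic part of the noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Saturated purple pixels carrying per-channel (chromatic) noise.
base = np.array([1.5, 0.2, 1.5])
pixels = base + rng.normal(0.0, 0.2, size=(4096, 3))

def attenuate_chroma(rgb):
    # Toy stand-in: blend toward gray more strongly as intensity rises,
    # loosely mimicking a path-to-white behavior.
    gray = rgb.mean(axis=-1, keepdims=True)
    w = np.clip(gray / 2.0, 0.0, 1.0)
    return rgb * (1.0 - w) + gray * w

def chroma(rgb):
    # Per-channel distance from gray; its variation is the chromatic noise.
    return rgb - rgb.mean(axis=-1, keepdims=True)

print("chromatic noise std, open domain:      ", chroma(pixels).std(axis=0))
print("chromatic noise std, after attenuation:", chroma(attenuate_chroma(pixels)).std(axis=0))
```

The underlying samples are the same; the chroma handling of the view transform alone changes how much of the noise is visible.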
Like you said, the parts of the image that have been “overexposed”, i.e. attenuated to solid white, don’t actually need that many samples. But if you think of the situation where you have HDR display support, you will start to see how a fixed assumption about a certain open-domain value starts to fall apart. And in a situation like the above, where the amount of noise differs depending on the view transform, the assumption is equally doomed.
Ideally the algorithm should look at the post-view-transform image to evaluate the noise level, instead of arbitrarily down-sampling above a certain open-domain intensity.
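Something along these lines, as a sketch (the half-buffer comparison is just one common way to estimate per-pixel noise, and `view` is a placeholder for whatever view transform the user has selected; none of this is the actual Cycles code):

```python
import numpy as np

def display_error(half_a, half_b, view):
    # Form the image first, then measure how much the two independent
    # half-buffers disagree; this is the noise the viewer actually sees.
    return np.abs(view(half_a) - view(half_b)).sum(axis=-1)

def needs_more_samples(half_a, half_b, view, threshold=0.01):
    # Per-pixel decision taken in display space, not in the open domain.
    return display_error(half_a, half_b, view) > threshold

# Usage with the toy HDR-style curve from the earlier sketch: a bright
# region still counts as noisy because its noise survives image formation.
hdr_curve = lambda x: x / (1.0 + 0.25 * x)
a = np.full((2, 2, 3), 4.0)
b = a + 0.3  # stand-in for half-buffer disagreement
print(needs_more_samples(a, b, hdr_curve))
```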
So, we should remove this broken assumption:
Whether open-domain R+G+B > 3 (like open-domain RGB [1.5, 1.5, 0.5]) is overexposed depends on the view transform; “noise” is only seen by users after the view transform. So this assumption is dead wrong.
Instead, move the noise evaluation to after image formation; then the issue you have with “oversampling the solid white” should be resolved.
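For example (reusing the toy curves from the first sketch; real transforms will differ), the very pixel above shows why a fixed open-domain test misfires:

```python
import numpy as np

pixel = np.array([1.5, 1.5, 0.5])      # open-domain R+G+B = 3.5 > 3

print("flagged by the fixed test:", pixel.sum() > 3.0)  # True

# Toy SDR clamp: the blue channel survives, so this is a yellow pixel,
# not solid white, even in SDR.
print("toy SDR clamp:", np.clip(pixel, 0.0, 1.0))

# Toy HDR-style curve: the value sits comfortably within HDR headroom,
# so any noise around it is plainly visible.
print("toy HDR curve:", pixel / (1.0 + 0.25 * pixel))
```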