I’m rendering an animation, with tons of frames and tons of samples, when I realized one thing: in the same way temporally aware denoisers “investigate” previous and later frames to do a consistent denoise, could it be possible to do a “sample aggregation” using this technique, so that more samples are gathered from previous and later frames to increase the sample count and reduce noise?
I’m not talking about denoising, which is a later process. I’m talking about actual sample creation, like when we can mix two pictures rendered on two different computers and get an increased number of samples. Something similar, but a bit more advanced.
Of course it would create samples for some parts of the image and not others, but by taking into account 2 or 3 frames around the “target frame”, maybe we could gather a considerable amount of information to increase the sample count and reduce noise.
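To make the idea concrete, here is a purely illustrative NumPy sketch of what merging reprojected samples could look like. This is not anything implemented in Cycles; it assumes the neighbor frames have already been warped into the target frame’s pixel grid (e.g. via motion vectors), and the function name and the simple mismatch-rejection threshold are made up for illustration:

```python
import numpy as np

def merge_reprojected_samples(target, target_spp, warped_neighbors,
                              reject_threshold=0.25):
    """Accumulate already-reprojected neighbor-frame samples into a target frame.

    target           : (H, W) noisy radiance estimate of the target frame
    target_spp       : samples per pixel used for the target frame
    warped_neighbors : list of (image, spp) pairs, each image already
                       warped into the target frame's pixel grid
    reject_threshold : hypothetical mismatch test; pixels whose warped
                       value differs too much from the target (occlusion,
                       shading change) contribute nothing

    Returns the merged estimate and the effective per-pixel sample count.
    """
    # Weighted accumulation: each frame contributes proportionally to
    # its sample count, exactly like averaging two renders of the same
    # scene made on two different machines.
    acc = target * float(target_spp)
    count = np.full(target.shape, float(target_spp))
    for image, spp in warped_neighbors:
        valid = np.abs(image - target) < reject_threshold
        acc = np.where(valid, acc + image * float(spp), acc)
        count = np.where(valid, count + float(spp), count)
    return acc / count, count

# Tiny demo: one matching neighbor pixel is merged, one mismatched
# (occluded) pixel is rejected, so only parts of the image gain samples.
target = np.array([[0.5]])
merged, spp = merge_reprojected_samples(
    target, 4, [(np.array([[0.52]]), 4), (np.array([[2.0]]), 4)])
```

In this toy run the first neighbor passes the mismatch test, so the pixel ends up with 8 effective samples and value 0.51, while the second neighbor is discarded. This also shows why the developer’s point below holds: the hard part is exactly the reprojection and mismatch detection that temporal denoisers already have to solve.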
It’s possible in principle, but it seems unlikely we’d spend time implementing or maintaining something like that, given the redundancy. If you are going to transfer samples from one frame to another, it requires solving many of the same problems as denoising, and any artifacts would look pretty similar as well.
Mmm, ok. I thought it could be interesting to interpolate samples from the same objects across different frames to get a higher sample count, like when two frames with different sample amounts are fused. But if it creates more problems than it solves, then it makes no sense to think about it.
BTW, any reference for the animation denoising command? I need to use it now (I saved EXRs with denoising data).