Taking a look at OptiX 7.3 temporal denoising for Cycles

Hi @YAFU,

The denoising sub-discussion started to take on a life of its own in the general Cycles Requests thread, causing a bit too much noise there (pun intended :wink:). I think it’s more convenient for the Cycles devs to read the denoising posts in this separate thread than as a sub-discussion tucked away among the many pages of the general Cycles Requests thread.

2 Likes

Thanks for the heads up. I keep forgetting a lot of the demo scenes have compositing disabled.

Okay, it turns out the tests are taking longer than expected. I hope to have results soon… but I created a bunch of little offshoot tests as I went (and I now have 100 GB of EXRs from 20 different denoising tests).
If you’re wondering what the tests are, they are as follows:

  1. Static noise
  2. Static noise with the camera rotated
  3. Animated noise
  4. Animated noise with the camera rotated

With each of these I rendered the scene with:

  1. OIDN (Colour+Albedo+Normal)
  2. OptiX 7.2 built into Blender (Colour+Albedo+Normal)
  3. OptiX 7.3 temporal denoising
  4. OptiX 7.3 temporal denoising, but with the flow pass black, i.e. all zeros (to see what happens without flow; a sketch for generating such a pass follows after this list)
  5. OptiX 7.3 temporal denoising, but with the flow pass as just a transparent image (just in case black means something in flow data and OptiX can see transparency?)
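
For reference, here’s roughly how one can generate the all-zero flow pass with the Python OpenEXR bindings. The 1920x1080 resolution, the two-channel R/G layout, and the file name are just assumptions for this sketch:

```python
import numpy as np
import OpenEXR
import Imath

# Write an all-zero ("black") flow pass as an EXR. The resolution, the
# two-channel layout (R = x motion, G = y motion), and the file name
# are assumptions for this sketch.
width, height = 1920, 1080

header = OpenEXR.Header(width, height)
half = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))
header['channels'] = {'R': half, 'G': half}

zeros = np.zeros((height, width), dtype=np.float16).tobytes()

exr = OpenEXR.OutputFile("flow_black_0001.exr", header)
exr.writePixels({'R': zeros, 'G': zeros})
exr.close()
```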

I probably won’t share the video for many of these tests; they’re primarily there for my own observation. But I’ll share results soon (hopefully…).

I’ve just started rendering the 300-sample version of the scene. Will post information (and a video?) when it’s finished.

Sorry, my video editing software keeps crashing and it’s getting really late. I’m unable to share a video right now, so I’ll just write up a few quick notes:

  1. Denoising the animation with black or transparent flow data didn’t seem to have much of an impact on the temporal result. The difference may be more apparent in other scenes with different sample counts and amounts of motion.
  2. My test with the rotated camera didn’t show any noticeable changes. I didn’t expect anything noticeable to happen, but I wanted to test it to make sure.
  3. As you would guess, OIDN and standard OptiX are not temporally stable at 100 samples at 1920x1080 in this classroom scene. OptiX temporal denoising helps, but there are still areas of temporal instability. Hopefully the 300-sample render will help?

No problem. There is no rush.
Thanks for your tests and notes!

The future looks to be more adaptive-sampling based, at least as far as cycles-x is concerned. Maybe it would be good to try adaptive sampling and investigate at which threshold level the results from the new OptiX meet or exceed those of the old version.

E.g. you might find that the old denoiser needs 500 samples with a 0.002 threshold to get a decent result even though it’s technically not temporally stable, while the new one may only need 500 samples with a 0.005 threshold to be perceptibly the same, but with a 15% render time reduction. Or something along those lines.

Alternatively, try bumping those sample counts up to 10,000 and letting the threshold fully dictate the stopping condition. That way you can give a more accessible blanket statement at the end, one that isn’t tied to the scene as much: i.e. to use the AI temporal denoisers, you need to let things converge to the 0.005 threshold level by using a sufficiently high sample count, regardless of scene.
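
For concreteness, here’s a minimal sketch of that setup in Blender’s Python API (the property names are from the Cycles add-on; the threshold and sample values are just the examples from above):

```python
import bpy

cycles = bpy.context.scene.cycles

# Make the sample count effectively unbounded so the noise threshold
# fully dictates when sampling stops.
cycles.samples = 10000
cycles.use_adaptive_sampling = True
cycles.adaptive_threshold = 0.005  # the level to compare denoisers at
cycles.adaptive_min_samples = 64   # optional floor before adaptivity kicks in
```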

Sorry it took so long, but I’ve now uploaded a video demonstrating OptiX temporal denoising in the classroom scene at 100 and 300 samples.

It can be found at the link below. It’s a 1920x1080 render; however, I have upscaled it to 3840x2160 in my video editor so it can be watched in “4K” on YouTube, which gets a higher bitrate and should thus reduce compression artifacts. 4K will probably not be available straight away, so you may need to wait a bit for it to be processed.

6 Likes

Good idea to go back and forth repeatedly in the video to better appreciate it!
The difference between standard and temporal is most noticeable at the edges of the shadows on the desks and on the floor.
I think that for production renders at high sample counts, OptiX temporal denoising is doing an acceptable job.

We still can’t be sure we are doing things right. In the little demo on the NVIDIA site, the example looks much more extreme than mine at 50 samples, yet with even better results. But that could be a special setup for advertising purposes.

But for now, I think the results we have obtained are acceptable.

Another thing I noticed looking back over the uncompressed version of the video was that the ceiling with temporal denoising is much better. With standard OptiX we get the “moving blotches” effect, and with temporal OptiX it’s close to stable (even at 100 samples).
Technically yes, this part of the scene is easy to denoise: it’s a low-contrast area with very little disocclusion. But it’s just something I noticed.

Link for reference: https://lh3.googleusercontent.com/71PXAbmvgoLVBMeoaQdhiu1uqdaDw5YkMCeaE4GFjGMg0vX2nSdmxtUF6qJK4vjqqm4QB8aSNskBNTujNWga_XnJH5HyBJXt4SEeo00lkCH4tgkfMmZ8IgnGCnKctQfZt1rH2ZC5
Looking back over the OptiX example scene shown on their website, it’s hard to make definitive calls on anything because the resolution is so low, but it still looks like, even with temporal denoising, there are a bunch of “moving blotches” (primarily around areas of detail). Everything else looks mostly fine, but that’s probably down to two simple factors.

  1. It’s low resolution.
  2. The areas that are fine are basically just flat colours, e.g. a solid grey wall. This is similar to the ceiling in the classroom.

I think we may be doing it right? Maybe not? I don’t have the technical knowledge to say anything with any certainty.

Maybe OptiX temporal denoising isn’t as good as I expected it to be; it’s just a little tool to do the final clean-up on animations once they’re rendered to an almost perfect state. Either way, seeing this integrated into Blender would be nice, and if not, I’m sure a small GUI app could easily be made to make OptiX temporal denoising more user friendly.

I might continue experimenting? Try simple scenes with small amounts of detail? Try scenes with lots of detail? Try different sample counts? Etc.
We’ll see what happens.

1 Like

Sorry, this is just me kind of going on a tangent, expressing my opinions, and hypothesizing.

To start off, OptiX temporal denoising is great: the temporal option does increase temporal stability. However, it doesn’t live up to my expectations, so I’m just going to talk about where I think OptiX temporal denoising falls short.

  1. I probably had too high expectations. Almost everything I do in Blender is denoised with OIDN, because in my experience OIDN does a better job at denoising than OptiX: it produces fewer AI blotches, achieves more accurate brightness (when rendering at a really low sample count), and seems to preserve more detail. And yes, I do have OptiX set up to use colour, albedo, and normals. Because of OIDN’s better image quality and my use of it for basically everything, I kind of had the expectation that OptiX would match that plus have temporal stability. That is not the case.
  2. OptiX doesn’t appear to take enough temporal information into account. The OptiX documentation has a small note about how the temporal denoiser works: it takes information from the current frame, motion vectors, and either the previous or next frame, and tries to produce a temporally stable result. This means OptiX has a temporal memory of 1 frame (see the sketch after this list). That’s better than 0 frames (no temporal denoising), but Disney has shown that scenes (even without motion) can benefit from 3 to 7 frames of temporal memory (source: check the video or page 10 of the PDF). I personally believe OptiX needs a larger “temporal memory” to achieve a better result, and hopefully that will come in a future update.
  3. OptiX can be trained with various models to handle different scenarios. It’s possible the default model in OptiX isn’t well suited to the noise produced by Cycles, or that the temporal denoising model just isn’t mature enough to be of high quality. That sort of thing could be fixed with future updates or by having someone train a new model (I’m fairly sure OptiX has an option to train and use alternative models). Also, AI is complex, with many different settings you can adjust; OptiX has simplified this process down to “input images and press denoise”. As a result we lose a lot of the ability to tweak settings to get the perfect result for a certain scene, and the same goes for OIDN. But on the plus side, both are easy to use and aren’t intimidating to new users.
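
To make point 2 concrete, here’s a rough sketch of the frame loop the OptiX docs imply. `load_passes` and `denoise_temporal` are hypothetical stand-ins (not real OptiX bindings); the point is just that the denoiser only ever sees one neighbouring frame of history:

```python
import numpy as np

def load_passes(frame):
    # Hypothetical stand-in: load the colour/albedo/normal/flow EXRs
    # for `frame`. Here it just returns blank 1920x1080 buffers.
    shape = (1080, 1920, 3)
    return (np.zeros(shape), np.zeros(shape), np.zeros(shape),
            np.zeros((1080, 1920, 2)))

def denoise_temporal(colour, albedo, normal, flow, previous_frame):
    # Hypothetical stand-in for a call into the OptiX temporal denoiser.
    return colour

previous_output = None
for frame in range(1, 251):  # assume a 250-frame animation
    colour, albedo, normal, flow = load_passes(frame)
    if previous_output is None:
        previous_output = colour  # the first frame has no history yet
    # One frame of temporal memory: the denoiser never sees anything
    # older than the previous frame's output.
    previous_output = denoise_temporal(colour, albedo, normal, flow,
                                       previous_frame=previous_output)
```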

I don’t know, I’m just ranting about things.

As a side note, I’m currently running more tests with OptiX denoising. May post results soon?

Yes, in my tests I also get better results with OIDN for still images, but it has the same problems with animation as well. OptiX’s main advantage is that it is faster, running on the GPU.

Well, this is the first version of the OptiX denoiser with a temporal denoising feature. We could expect improvements from NVIDIA in subsequent releases, right?

1 Like

Most likely yes.

As a side note, I ran a test at 16, 50, 100, and 300 samples with the classroom scene where I used a material override to replace everything with the “default Principled shader” (a sketch of the setup is below). As expected, the temporal denoiser’s ability to produce a clean and stable image is greatly increased when it’s not trying to retain texture detail. However, areas that still had detail (in the form of geometric detail) still had issues. This could just be a limitation of the temporal denoiser, or the motion vector issue we’ve kind of been stepping around.
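
For anyone wanting to reproduce the override, here’s a minimal sketch via Blender’s Python API. The material name is arbitrary, and `use_nodes = True` gives a node tree containing just a default Principled BSDF:

```python
import bpy

# Build a material containing only a default Principled BSDF and set it
# as the override on every view layer. "DefaultPrincipled" is an
# arbitrary name for this sketch.
override = bpy.data.materials.get("DefaultPrincipled")
if override is None:
    override = bpy.data.materials.new("DefaultPrincipled")
    override.use_nodes = True  # new node trees start with a Principled BSDF

for view_layer in bpy.context.scene.view_layers:
    view_layer.material_override = override
```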

And after doing some simple tests, it seems like it might be the motion vector issue… The motion vectors seem to be backwards… This all depends on how you interpret the data, but here’s what I found:

With my interpretation of the data, OptiX is expecting motion vectors that give the direction the pixels have to move to get to the next frame, while Cycles is giving the direction the pixels have to move to get to the previous frame.

I could be wrong about this. But if it’s true, it seems like this may be one of our major problems.
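
If the direction really is flipped, the cheapest thing to try is negating the vectors before handing them to OptiX. A minimal sketch with the Python OpenEXR bindings, assuming the flow lives in the R and G channels as 32-bit floats (the file names are placeholders):

```python
import numpy as np
import OpenEXR
import Imath

# Negate the x/y motion channels of a flow EXR to reverse the vector
# direction. The channel layout and file names are assumptions.
src = OpenEXR.InputFile("flow_0001.exr")
dw = src.header()['dataWindow']
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

float_type = Imath.PixelType(Imath.PixelType.FLOAT)
flipped = {}
for name in ('R', 'G'):
    data = np.frombuffer(src.channel(name, float_type), dtype=np.float32)
    flipped[name] = (-data).tobytes()  # reverse the vector direction

header = OpenEXR.Header(width, height)
header['channels'] = {n: Imath.Channel(float_type) for n in ('R', 'G')}
out = OpenEXR.OutputFile("flow_flipped_0001.exr", header)
out.writePixels(flipped)
out.close()
```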

1 Like

Nice tests @Alaska, thanks. My opinion:
I am quite disappointed by this OptiX temporal feature, because it simply doesn’t eliminate the issue, it just makes it a bit less evident. The trembling pixels are still there, and not only that: the image pays a cost in terms of texture sharpness. Check the zoomed clips.
I’m speaking of the 300-sample tests, because the 100-sample ones are “unsellable”… and overall, at 300, I almost prefer the noisy clip!

It’s possible my tests were less than ideal due to an issue with motion vectors. I’m testing a “fix”, and if it’s any better I’ll upload a video demonstrating the results. If it’s not, then I’ll probably just make a comment about it.

But I generally agree with you: OptiX temporal denoising, as I’ve seen so far from my own (potentially flawed) tests, just isn’t that great. It’s better than nothing in cases like the 300-sample scene, but it’s just not as good as it could be. Hopefully this will improve with updates.

Keep in mind that we still don’t know if we are using this “flow” or vector pass correctly. It would be good if Brecht, Lukas, or Stefan are reading this; if they have some time, maybe they could give us a clue about it.

In general, adaptive sampling results aren’t going to be that different from non-adaptively sampled renders. The idea is that adaptive sampling is supposed to produce an image with a noise level similar to when the feature is turned off, but with increased performance, by not sampling where it isn’t needed.

I could run tests with different noise thresholds in Blender, but at the moment the results from OptiX temporal denoising aren’t that great even with adaptive sampling turned off, so those tests may have to wait until later.

In the meantime I’m running a test: I made small adjustments to the animation in the scene and am rendering it normally, but rendering the “flow” pass with the animation reversed. I can then flip the flow pass naming scheme, and in theory it should be what OptiX temporal denoising expects…
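
The naming-scheme flip itself is simple; something along these lines (the file names and the 250-frame range are placeholders for my setup):

```python
import os

# Rename a flow sequence rendered with the animation reversed so its
# frame numbers line up with the forward render.
frame_count = 250
for i in range(1, frame_count + 1):
    os.rename(f"flow_rev_{i:04d}.exr",
              f"flow_{frame_count - i + 1:04d}.exr")
```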

However, there could still be issues with the data being in the wrong format? Not sure. We’ll find out soon.

1 Like

I have found this documentation from NVIDIA about motion vectors and optical flow:
https://developer.nvidia.com/blog/tag/motion-vectors/

And this:

I don’t know if it will be useful to you; I don’t understand a word :slight_smile:

Yes, it’s not really about that directly (though the use of PMJ sampling may show more or less splotchy patterns; I haven’t really been a fan of PMJ). The goal is to understand at what noise threshold the denoisers become… “acceptable”. Adaptive sampling is the only tool we have right now to measure some form of mathematical noise level during a render, and it may be the primary tool of the user in the future.

Having the raw sample counts from your experiments is still useful, but they’re not directly transferable to other scenes, and it’s difficult to tell how much further you’d have to go to get acceptable results.

Okay, that makes more sense. I’ll look into that once I’ve figured out this motion vector thing.

1 Like

That looks way more like what OptiX is expecting. I’ll investigate it further and see what I can get out of it.