OIDN Viewport Denoiser recalculates too frequently, drastically reducing viewport render performance

This issue has been present for months and appears to be on track to make it into the official 2.90 release.

When viewport denoising is set to OIDN, the denoiser recomputes its result after every single sample, causing a significant reduction in viewport rendering performance. For example, in a very simple exterior scene, a viewport render without denoising takes 4 seconds to reach 32 samples, while with denoising enabled it takes 12 seconds to reach the same sample count. And this is not on a weak CPU either; it’s running on a Ryzen 3900X.

The Start Sample parameter is not really helpful here, because it only delays denoising until the given sample count is reached; after that, denoising still runs on every sample.

What’s needed here is a reasonable update rate for the OIDN viewport denoiser, so it doesn’t completely bottleneck viewport rendering. It could be either time based (every X seconds) or sample based (every X samples); a rough sketch of such throttling follows below. As it stands, performing a full OIDN denoise every sample results in far more CPU time being spent on denoising than on the actual rendering, and in the case of GPU rendering, the GPU sits idle waiting for the CPU to denoise most of the time.
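
To illustrate the idea, here is a minimal C++ sketch of what such throttling could look like. This is not actual Cycles code; all names (`DenoiseThrottle`, `should_denoise`, the interval fields) are hypothetical and only demonstrate the combined sample-based and time-based gating described above.

```cpp
// Hypothetical sketch of throttling viewport denoising.
// Not actual Cycles/OIDN API; names and defaults are illustrative only.

#include <chrono>

struct DenoiseThrottle {
  int sample_interval = 8;     // denoise at most every N samples
  double time_interval = 1.0;  // ...or at most every X seconds
  int last_denoised_sample = -1;
  std::chrono::steady_clock::time_point last_denoise_time =
      std::chrono::steady_clock::now();

  // Decide whether to run the (expensive) OIDN pass for this sample.
  bool should_denoise(int sample, bool is_final_sample) {
    using namespace std::chrono;
    const double elapsed =
        duration<double>(steady_clock::now() - last_denoise_time).count();

    // Always denoise the final sample so the converged image is clean,
    // otherwise only when either interval has elapsed.
    if (is_final_sample ||
        sample - last_denoised_sample >= sample_interval ||
        elapsed >= time_interval) {
      last_denoised_sample = sample;
      last_denoise_time = steady_clock::now();
      return true;
    }
    return false;
  }
};
```

With a scheme like this, rendering proceeds at close to full speed between denoise passes, and the user still gets a denoised preview at a predictable cadence.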

This really diminishes most of the benefit OIDN is supposed to bring.
