Problem with multi-GPU rendering viewport performance

I’ve noticed that when two or more GPUs with significantly different performance levels are enabled, viewport rendering performance drops noticeably.

Take this setup, for example: a GTX 1050 Ti and an RTX 2070 Super.

In the BMW test scene, the viewport render time to 100 samples is as follows:

2070 = 5 seconds
2070 + 1050 = 35 seconds
1050 = 50 seconds

This also occurs with CPU+GPU rendering.

As I understand it, for each viewport sample the work is split evenly between the selected devices. This works well for devices of similar performance, but with a gap as large as the setup listed above, it just results in a lot of waiting on the part of the faster device.
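To put rough numbers on that (just an estimate based on the single-device times above, assuming a perfectly even per-sample split and ignoring synchronisation overhead):

```python
# Rough estimate from the reported single-device times (BMW scene, 100 samples).
samples = 100
time_2070 = 5.0    # seconds, RTX 2070 Super alone
time_1050 = 50.0   # seconds, GTX 1050 Ti alone

# With a 50/50 split, each card only has to render half of each sample,
# but the next sample can't start until the slower card has finished its half.
half_sample_2070 = time_2070 / samples / 2   # ~0.025 s
half_sample_1050 = time_1050 / samples / 2   # ~0.25 s
combined = max(half_sample_2070, half_sample_1050) * samples
print(combined)  # ~25 s before any sync overhead, vs. the 35 s observed
```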

The current workaround is to disable the lower-powered GPU/CPU until the final render, where you re-enable it. That said, what is the possibility of adding an option to Blender that lets the user specify one set of GPUs for viewport rendering and another for final renders?
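For reference, the toggling can at least be scripted instead of done by hand in the preferences each time. A minimal sketch, assuming Blender 2.8x+ with the CUDA backend; the "2070" name filter is just an example and would need to match your own device names:

```python
import bpy

def use_only(name_fragment):
    """Enable only the Cycles render devices whose name contains name_fragment."""
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'   # 'OPTIX' is another option on RTX cards
    prefs.get_devices()                  # refresh the device list
    for device in prefs.devices:
        device.use = name_fragment in device.name

# Fast card only while working in the viewport:
use_only('2070')

# Before the final render, turn everything back on:
# for device in bpy.context.preferences.addons['cycles'].preferences.devices:
#     device.use = True
```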


Yes, this is known not to be optimal. Scheduling work between GPUs with different performance is hard for interactive rendering; for final renders, tiles help distribute the work better.

We have some ideas for this, but for now this is expected.


Hi. I’ve created an addon which addresses exactly this issue (by letting you choose different devices for viewport and final rendering). It’s free. Check it out: https://youtu.be/rIddu96tDYE
Download link:
https://gumroad.com/l/Ocwql
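
For anyone curious about the general idea, the effect can be sketched with render handlers that flip the device flags around a final render. This is only an illustrative sketch, not the addon’s actual implementation; the device name filter is an example, and whether toggling devices as late as render_pre is picked up by the render may depend on the Blender version:

```python
import bpy
from bpy.app.handlers import persistent

FAST_GPU = '2070'  # substring of the device to keep enabled for viewport work

def _set_devices(all_on):
    prefs = bpy.context.preferences.addons['cycles'].preferences
    for device in prefs.devices:
        device.use = all_on or FAST_GPU in device.name

@persistent
def on_render_pre(scene, *args):
    _set_devices(all_on=True)    # every device for the final render

@persistent
def on_render_post(scene, *args):
    _set_devices(all_on=False)   # back to the fast GPU for viewport work

bpy.app.handlers.render_pre.append(on_render_pre)
bpy.app.handlers.render_post.append(on_render_post)
```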
