From my understanding, for each sample of the viewport, the render is split evenly between the selected devices. This works well for devices of similar performance levels, but once you get to performance differences like the device setup I listed above, it just causes a lot of waiting on the part of the faster device.
The current workaround is to disable the lower-power GPU/CPU until you come to the final render, where you re-enable it. However, what are the chances of seeing an option added to Blender that lets the user specify one set of GPUs to use for viewport rendering and another set for final renders?
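To avoid clicking through Preferences every time, the workaround above can also be scripted. This is only a sketch: the helper below works on any objects exposing a `name` string and a `use` boolean, which matches the shape of the device entries in Cycles' preferences, but the exact preference paths should be verified against your Blender version, and the device name shown is a placeholder.

```python
def select_devices(devices, wanted):
    """Enable only the devices whose names are in `wanted`; disable the
    rest. Returns the list of enabled device names."""
    enabled = []
    for dev in devices:
        dev.use = dev.name in wanted
        if dev.use:
            enabled.append(dev.name)
    return enabled

# Inside Blender, this could be driven from the Cycles add-on
# preferences (verify the attribute names in your Blender version):
#
#   import bpy
#   prefs = bpy.context.preferences.addons["cycles"].preferences
#   # Keep only the fast GPU for viewport work:
#   select_devices(prefs.devices, {"YOUR FAST GPU NAME HERE"})
```

You could bind one call to a "viewport" device set and another to a "final render" set, which approximates the requested feature until something official exists.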
At the moment, using OptiX with two different GPUs seems to perform the same as using the faster of the two GPUs alone. So either viewport performance isn’t improved by a second GPU, or it requires two identical GPUs.
Could you please specify what happens, and if it’s a reproducible issue, please submit a bug report via Blender’s Help menu ➔ Report a Bug. Your system info will be inserted automatically into the online form.
If Blender doesn’t work at all, you can use this link to submit a bug report.
Sorry, YouTube did not notify me of your answer. What happens is that when rendering in real time in the viewport, a message appears reporting the error. There is an image in the description. It happens both with this application and without it.
I’m pretty sure this is an issue related to Windows’ hardware-accelerated GPU scheduling. Go to Settings -> System -> Display -> Graphics settings, turn off Hardware-accelerated GPU scheduling, then restart. It should work fine after that.