Blender 2.8: Cycles Optix on non-RTX card

What do you think about enabling OptiX 7 for non-RTX cards?

There are a few lines in device_optix.cpp that can be disabled (commented out):

    // Only add devices with RTX support
    //if (rtcore_version == 0)
    //  it = cuda_devices.erase(it);
    //else

It works on a GTX 970 (rendering is slower) and a V100 (rendering time is the same), on both Windows and Linux.


Oh, thanks for this, I’m going to do that in our build right away!

In my case, a simple scene went from 30 seconds in CUDA to 27 seconds in OptiX. Not awesome… but faster :slight_smile: And in theory I can now use a 2080 Ti together with a 1080 Ti, for example :slight_smile:

There may be some bugs, but we will have to find out :slight_smile:

EDIT: BTW, to compare render times, use 512x512 as the tile size for OptiX :slight_smile:

Wow, that is awesome!
Why is this not enabled by default?
My computer has three graphics cards. I recently swapped one of my three GTX 1080s for an RTX 2080 SUPER. Everything seems to work just fine so far.

In the scene I just tested, I get the following results:

1 GTX 1080 on CUDA: 7:47 min
1 GTX 1080 on OPTIX: 7:34 min (interesting…)
1 RTX 2080 on CUDA: 4:43 min
1 RTX 2080 on OPTIX: 2:50 min (super interesting… :wink:)

1 RTX + 2 GTX on CUDA: 2:13 min
1 RTX + 2 GTX on OPTIX: 1:46 min

So, OPTIX, yay! If only it had the Bevel and AO shaders, and most importantly BPT.
But other than that, I think it should be enabled for GTX cards as well!


I’m doing some tests with OptiX on older GPUs, and it also works on a 980M with 8 GB :smiley:

I have a 970M and only did this for the OptiX denoiser… it works :slight_smile: [also with D6554 applied]


Nice find!

Not to completely derail this discussion, but I think there are actually better techniques on the block these days. Raw BPT has issues with certain paths. To overcome some of that, there is Unbiased Photon Gathering (pdf), which is like an unbiased version of photon mapping. And instead of a fully new algorithm, it’s also possible to modify BPT (pdf) to make those problematic paths more accessible.

Either way, as far as I know these algorithms are generally hard to implement efficiently on the GPU. Roughly, the problem is this: while these algorithms are just as parallelizable as plain path tracing, in that you add up lots of events from the same basic process to get your end result, it’s hard to predict how each individual pass will play out. That means you can’t group similar tasks (similar rays) together as well, which leads to context switching, and GPUs are really bad at that.

When Sebastian wrote BPT, he was referring to Branched Path Tracing. Could it be that you understood Bidirectional Path Tracing instead?

Oh. My bad :sweat_smile:
That is indeed what I understood