Blender 2.8: Cycles OptiX on non-RTX cards

What do you think about enabling OptiX 7 for non-RTX cards?

There are a few lines in device_optix.cpp that can be disabled (commented out):

    // Only add devices with RTX support
    //if (rtcore_version == 0)
    //  it = cuda_devices.erase(it);
    //else
    ++it;  // keep the loop increment uncommented, so every CUDA device stays in the list

It works on a GTX 970 (rendering is slower than CUDA) and on a V100 (rendering time is the same), on both Windows and Linux.
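For reference, the file is intern/cycles/device/device_optix.cpp in the Blender source tree, and the change only takes effect in a custom build. A rough Linux sketch, assuming a source checkout and the prerequisites from the Building Blender wiki:

    # from the root of a Blender source checkout
    make update    # fetch submodules and the precompiled libraries
    # edit intern/cycles/device/device_optix.cpp as shown above, then:
    make           # the build lands in ../build_linux/bin by default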


Oh, thanks for this, I’m going to do that in our build right away!

In my case, a simple scene went from 30 seconds in CUDA to 27 seconds in OptiX. Not awesome… but faster :slight_smile: And in theory I can now use a 2080 Ti together with a 1080 Ti, for example :slight_smile:

There may be some bugs, but we will have to find out :slight_smile:

EDIT: BTW, to compare render times, use a 512×512 tile size for OptiX :slight_smile:


Wow, that is awesome!
Why is this not enabled by default?
My computer has three graphics cards. I recently swapped one of my three GTX 1080s for an RTX 2080 SUPER, and everything seems to work just fine so far.

In the scene I just tested, I get the following results:

1 GTX 1080 on CUDA: 7:47 min
1 GTX 1080 on OPTIX: 7:34 min (interesting…)
1 RTX 2080 on CUDA: 4:43 min
1 RTX 2080 on OPTIX: 2:50 min (super interesting… :wink:)

1 RTX + 2 GTX on CUDA: 2:13 min
1 RTX + 2 GTX on OPTIX: 1:46 min

So, OPTIX, yay! If only it had the Bevel and AO shaders, and most importantly BPT.
But other than that, I think it should be enabled for GTX cards as well!


I’m doing some tests with OptiX on older GPUs, and it also works on a 980M with 8 GB :smiley:

I have a 970M and only did this for the OptiX denoiser… it works :slight_smile: [also with D6554 applied]


Nice find!

Not to completely derail this discussion, but I think there are actually better techniques on the block these days. Raw BPT has issues with certain paths. To overcome some of that, there is Unbiased Photon Gathering (pdf), which is like an unbiased version of photon mapping. And instead of a fully new algorithm, it’s also possible to modify BPT (pdf) to make these problematic paths more accessible.

Either way, as far as I know these algorithms are generally really hard to implement efficiently on the GPU. Roughly, the problem is this: while those algorithms are basically just as parallelizable as plain path tracing, in that you add up lots of events from the same basic process to get your end result, it’s hard to predict how each individual pass works out. That means you can’t group similar tasks (similar rays) together as well, which means divergence and context switching, which GPUs are really bad at.

When Sebastian wrote BPT, he was referring to Branched Path Tracing. Could it be that you understood Bidirectional Path Tracing instead?

Oh. My bad :sweat_smile:
That is indeed what I understood

Thank you very much for the tip! It works fantastically here on a GTX 960.

A question: by editing the source code, would it be possible to make the viewport denoiser kick in only when rendering reaches the last sample configured in Sampling > Viewport (not after each sample)?

Edit:
I just found the option, a bit hidden, in "Performance > Viewport > Denoising Start Sample".

Another question: should the viewport denoiser work for "Performance > Viewport > Pixel Size" values other than 1?

I just want to know whether this doesn’t work on my card because it is a GTX card, whether it may be a bug, or whether the viewport denoiser simply hasn’t been implemented yet for viewport resolutions other than 1×.

If you edit the source, you will get GTX cards working with OptiX and the viewport denoiser. I tested it with a GTX 970 :slight_smile: (use the latest drivers)

This sounds like a bug.

Where is this file? Do you need to make a custom build?

Thank you. I have reported the problem just in case.

@Angelrebirth You must build with the modifications indicated in the first message, or look for a build that contains those modifications.

There’s no need for a custom build for Optix on GTX cards anymore.

Cycles/Optix: Add CYCLES_OPTIX_TEST override
https://developer.blender.org/rB58ea0d93f193adf84162d736c3c69500584e1aef

Tested with an Nvidia 750 Ti: the viewport denoiser and rendering with OptiX both work, and renders are 10 to 20 seconds faster depending on the scene I try.


To clarify this for people: this isn’t a CMake flag, it’s an environment variable (the picture above looks like a cmake-gui window to me, but maybe it’s a GUI environment variable editor).

So for example, on my Ubuntu machine, I can run export CYCLES_OPTIX_TEST="all" in a console; when I then launch Blender from that console, it inherits the environment variable and I can use OptiX with my GTX card (works beautifully on a GTX 1080). To make this persistent so that I don’t have to set the variable every time, I can put that export line in my shell profile (e.g. ~/.profile).
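In concrete terms, a minimal shell sketch (the variable just needs to be present in the environment Blender is launched from; judging by this thread, both "all" and "1" work as values):

    # one-off: set the override just for this Blender launch
    CYCLES_OPTIX_TEST=all ./blender

    # or export it for the whole shell session, then launch normally
    export CYCLES_OPTIX_TEST=all
    ./blender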

This works on the build-bot builds, ever since the commit was added.

I hope this helps people who were confused, like me. Thanks @LazyDodo for helping me over at Blender.chat, and for making the commit.

edit: I’m an idiot. ./blender runs Blender


This is just the KDE properties editor for a launcher.
If you want to edit a Linux launcher (.desktop file) manually, the "Exec=" line should be:
Exec=CYCLES_OPTIX_TEST=1 /BLENDER_EXECUTABLE_PATH_FOLDER/blender

If there are spaces in the folder name, put an asterisk * in place of each space.
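As an example with a made-up path: if Blender lived in /home/me/Blender Builds/, the line could look like either of these (the second form uses an explicit env prefix plus quoting, which is the more portable way to handle the variable and the space in a .desktop file):

    # hypothetical path containing a space: /home/me/Blender Builds/blender
    Exec=CYCLES_OPTIX_TEST=1 /home/me/Blender*Builds/blender

    # the same launch with an env prefix and quotes instead of the asterisk trick
    Exec=env CYCLES_OPTIX_TEST=1 "/home/me/Blender Builds/blender"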


Does the latest Blender 2.83 build have the RTX-only restriction disabled?

Can you give more detail on how to do this? I am new to programming but wanted to get into it through Blender. I wanted to start by enabling OptiX on my Titan V, but I have tried a custom build using the suggestions in this thread as well as the links here:
https://wiki.blender.org/wiki/Building_Blender/CUDA
https://developer.blender.org/rB58ea0d93f193adf84162d736c3c69500584e1aef

I’m using 64-bit Windows and tried compiling with Visual Studio 2019.
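For what it’s worth, the basic Windows flow from the wiki (a sketch, assuming Visual Studio 2019 and Git are installed, run from the root of the source checkout) is:

    rem fetch submodules and the precompiled libraries
    make update
    rem then build; make.bat auto-detects Visual Studio (append "2019" to force that version)
    make

Note that since the CYCLES_OPTIX_TEST commit linked above, a custom build shouldn’t be needed just to test OptiX on a non-RTX card; setting that environment variable for an official build should be enough.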