RTX GPU Support

Commercial vendors had been working with Nvidia on integrations before RTX was publicly announced; Blender/Cycles wasn't part of that.

Whoa, really? I was expecting Blender to be supported by Nvidia in that case, especially after some of Ton's tweets and talks with Nvidia HQ.

Considering a switch to the OptiX library: what speedup can we expect? Do you have any idea whether it's just a few percent or something bigger? I assume the biggest plus of going with OptiX is that as soon as a new GPU is out, we already support it, right? Because it would be Nvidia making the OptiX updates, not us like it is currently? Or am I missing something? I remember something like that from a very old conference when they showed iRay xD

OptiX was not compatible with Blender until the RTX announcement, and that information was not available to the public, nor (it seems) to Blender, so OptiX was a big NO until it was implemented in the drivers. Other render engines, like Octane, had been working with Nvidia before the RTX announcement (before Ton's tweet); Blender has not been able to until the license was made GPL compatible.

On the other hand, adaptive sampling was giving some bucket errors, I think. But I agree that the "experimental" part of Cycles should be more filled with "experimental" or "not totally supported" features, to let users play and take the risk themselves; in the end, the warning is right there: "experimental", hehe

Regarding caching: the Corona cache was a joke, not because it was bad, but because it was not really needed as a "cache" (as a technique it was fine); we never actually used the cache, it was calculated per frame and it worked flawlessly. As for photon mapping, I agree with Lukas: as a full render system it could be a big mess. I see it as a possible "caustics" layer solver, but I'm not sure it's worth the time and effort just for caustics, and there may be better future solutions, like a proper bidirectional solver or others (I'm not sure about that "others").

But I also agree that some techniques are important; that's why I proposed borrowing the adaptive sampling technique from Appleseed, since it's also open source. I'm not sure it's entirely possible, though, since I don't know how hard it would be to implement in Cycles.

More performance improvements can make their way into Cycles, but we are already getting a lot of improvements every day, so I think it's a matter of developers being able to dedicate their time to it. The new Dev Fund will help a lot in the long run :slight_smile:


I think you may have misinterpreted the Corona talk at Siggraph. They were talking about why denoising the primary brute-force bounce was superior to using something like an irradiance cache, despite both being quite similar in principle. But that did not concern caching of secondary GI bounces at all; cached secondary GI is still crucial for getting acceptable render times in scenarios where the majority of the scene is lit indirectly (interiors), and that won't change for many years to come.
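
For context, here is a minimal sketch of how irradiance-cache-style reuse of secondary GI works, in Python. It's purely illustrative (Ward-style weighting; the record fields and thresholds are my assumptions, not Corona's or Cycles' actual code):

```python
import math

class IrradianceRecord:
    """One cached sample of diffuse indirect light (illustrative fields)."""
    def __init__(self, pos, normal, irradiance, r_mean):
        self.pos = pos                 # world-space position of the record
        self.normal = normal           # surface normal where it was computed
        self.irradiance = irradiance   # cached indirect irradiance (RGB list)
        self.r_mean = r_mean           # harmonic mean distance to nearby geometry

def ward_weight(rec, p, n, max_err=0.3):
    # Ward's error metric: grows with distance (relative to r_mean)
    # and with how much the normals disagree.
    d = math.dist(rec.pos, p)
    ndot = sum(a * b for a, b in zip(rec.normal, n))
    err = d / rec.r_mean + math.sqrt(max(0.0, 1.0 - ndot))
    return None if err >= max_err else 1.0 / max(err, 1e-6)

def lookup(cache, p, n):
    """Blend nearby records; return None when no record is usable,
    in which case the renderer traces secondary rays for real and
    stores a new record. Reuse is where the big speedup comes from."""
    total_w, total_e = 0.0, [0.0, 0.0, 0.0]
    for rec in cache:
        w = ward_weight(rec, p, n)
        if w is not None:
            total_w += w
            total_e = [e + w * c for e, c in zip(total_e, rec.irradiance)]
    return None if total_w == 0.0 else [e / total_w for e in total_e]
```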

In Cycles, we have these unfortunate, quality-ruining hacks such as the Simplify AO bounces feature to fight the utter inefficiency of pure path tracers in such scenarios.

So even if Cycles receives, let's say, a 3x speed boost with the OptiX library, the render times for more complex interior scenes (with smaller openings for light to enter) will simply go from "unacceptable" to "still unacceptable", unless you, again, use one of those quality-ruining hacks like Simplify AO bounces, or introduce invisible fill lights around the scene.

I’ve made a test thread about it: https://blenderartists.org/t/cycles-performance/1121187
And the conclusion was that if you were rendering on the top-of-the-shelf GPU of the time (GTX 1080 Ti) with pure path tracing, you would have to add 12 more such GPUs to your system to match the performance of a single such GPU + cached secondary GI.
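
As back-of-the-envelope numbers (the 13x factor comes straight from that test, so it only holds for that kind of scene):

```python
# 1 GPU + cached secondary GI matched 1 + 12 = 13 GPUs doing pure path
# tracing on that interior test scene, i.e. the cache was worth ~13x.
cache_speedup = 13
optix_speedup = 3                       # the hypothetical boost mentioned above
print(cache_speedup / optix_speedup)    # ~4.3: still well behind caching
```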


Well, yes, that comes back to the online guiding topic a bit - for example, the “Practical Path Guiding” paper caches a spatial radiance estimate which could be used for secondary bounces (they use it for a heuristic that determines path splitting/termination).
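
The idea behind that heuristic, roughly: compare a path's expected remaining contribution (throughput times the cached radiance estimate) with the running pixel estimate, then terminate unpromising paths and split important ones. A hedged Python sketch, using scalars instead of RGB and a made-up splitting clamp, not the paper's actual code:

```python
import random

def rr_split_factor(throughput, cached_radiance, pixel_estimate):
    """Decide how many continuations a path gets (0 = terminate).
    Returns (count, weight_scale) so the estimator stays unbiased."""
    expected = throughput * cached_radiance      # likely remaining contribution
    q = expected / max(pixel_estimate, 1e-8)
    if q < 1.0:
        # Russian roulette: survive with probability q, reweight survivors.
        if random.random() >= q:
            return 0, 0.0
        return 1, 1.0 / q
    # Splitting: an unusually important path spawns several continuations,
    # each carrying a fraction of the weight (clamped to avoid explosion).
    n = min(int(q), 8)
    return n, 1.0 / n
```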

Something like that could be added to Cycles, but there are a few implementation questions - for example, the cache is built during rendering, so we’d need to either do a prepass or do some hybrid scheme where the GPU renders as usual while the CPU processes its output and updates the cache. Even then, the noise from the first few samples where no cache is available yet would remain in the image.
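
In outline, that hybrid scheme might look like this (a Python sketch; render_sample_gpu and update_guiding_cache are hypothetical stand-ins with trivial bodies, just to show the data flow):

```python
import collections
import queue
import threading

Result = collections.namedtuple("Result", "path_records")
guiding_cache = []                       # the spatial radiance/guiding cache
sample_queue = queue.Queue(maxsize=64)   # GPU output waiting for the CPU

def render_sample_gpu(sample, cache):
    # Would launch a CUDA/OptiX kernel; early samples see an empty cache,
    # which is where the residual noise mentioned above comes from.
    return Result(path_records=[("vertex_data", sample)])

def update_guiding_cache(records):
    # Would refine the spatial structure; a real version needs locking or
    # double-buffering so the GPU never reads a half-updated cache.
    guiding_cache.extend(records)

def gpu_worker(n_samples):
    for s in range(n_samples):
        sample_queue.put(render_sample_gpu(s, guiding_cache))
    sample_queue.put(None)               # sentinel: rendering finished

threading.Thread(target=gpu_worker, args=(16,), daemon=True).start()
while (result := sample_queue.get()) is not None:
    update_guiding_cache(result.path_records)
```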

Also, as soon as you start to use the cache to directly look up radiance instead of sampling distributions, issues like light bleeding start showing up again. Sure, you can address those, but at that point it starts to become really painful to implement all of that.

Personally, I'd rather stick with denoising approaches and e.g. apply stronger filtering to the indirect light transport components. CNN-based denoising algorithms show great results when it comes to smoothing out noisy low-frequency images without blotchy artifacts, so doing e.g. a hybrid scheme with an NFOR-based direct lighting filter and a CNN-based indirect lighting filter might make sense.
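
The recombination part of such a hybrid is simple. A NumPy sketch (nfor_filter and cnn_filter are placeholders for the real denoisers, and the albedo demodulation is a common trick, not necessarily what Cycles would do):

```python
import numpy as np

def denoise_hybrid(direct, indirect, albedo, nfor_filter, cnn_filter):
    # Demodulate by albedo so the filters see smooth illumination
    # rather than texture detail (guard against division by zero).
    safe_albedo = np.maximum(albedo, 1e-4)
    direct_d = nfor_filter(direct / safe_albedo)     # gentle, detail-preserving
    indirect_d = cnn_filter(indirect / safe_albedo)  # aggressive low-frequency
    return (direct_d + indirect_d) * albedo          # remodulate and recombine

# Smoke test with identity "denoisers" on random 4x4 RGB passes:
direct = np.random.rand(4, 4, 3)
indirect = np.random.rand(4, 4, 3)
albedo = np.full((4, 4, 3), 0.5)
img = denoise_hybrid(direct, indirect, albedo,
                     nfor_filter=lambda x: x, cnn_filter=lambda x: x)
```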

A prepass is not a bad idea. As I said before, the way we used Corona was without saving the cache; each frame could have a prepass, as long as it doesn't generate problems, like it was in Corona. If that is enough, it could be a good step forward to accelerate things while we get a proper adaptive sampling technique. I remember that was a huge leap in speed for Corona too. (How many times have I said Corona? hahaha)

Cheers!

Will RTX cards also be supported in 2.79? I seem to have been a little too early in replacing my GTX 970 with an RTX 2080… are there any options to pass the time? I really need Blender working for my current project. I don't want to hurry anyone, but maybe someone can show me an alternative solution…

Check Graphicall.org, there is a build that works with Turing, just CUDA though :slight_smile:

I only see it for Linux; is it possible to make it work on Windows?

Oh!

I thought it was for Windows, but I remembered it wrong.

I don't have such a build, and I can't build one for you today; my build environment isn't set up right at the moment.

But here you have a step-by-step tutorial :slight_smile:

Cheers!


I for one would really appreciate an official, proper Windows 2.79 version with CUDA 10 - maybe 2.79c, so that *b remains for older CUDA implementations. The cards are here, 2.8 is far from production ready, and a lot of plugins have not been ported yet (or may never be ported at all).


Could you please explain to me how NVIDIA® OptiX™ can be just a part of the graphics driver and still be compatible with the GPL?

If NVIDIA® OptiX™ is just part of "their" proprietary driver and we add it to "our" Blender, it could be a shady way of bringing proprietary software into free and open source software with a GPL license like Blender.

I understand that not everybody cares about the GPL license, and for some it can be seen as a limitation.
But I just want to point out that we can call Blender "our software" and say that we "own Blender" thanks in part to the wise decision to develop Blender under the GPL license, and of course thanks to the fantastic job of @Ton, all the developers, and the community.
So clear GPL license compatibility (like OpenSubdiv, Embree, etc.) is very important to support and defend "our Blender's" freedom and rights.

Anyway, there is an alternative that seems interesting, and it has an LGPL license:

The OptiX part of the driver is like OpenGL / DirectX / Metal, but for ray tracing rather than rasterization. A pure CUDA ray tracer can't take advantage of the RTX hardware units.

The latest builds now have support for 20xx GPUs:

Note this is not using OptiX or RTX hardware units, just CUDA.

I have done some basic benchmarking. So far, on a Quadro RTX 6000 I got about the performance expected when extrapolating the benchmark timing based on the number of cores and the clock rate.
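
For the curious, the extrapolation itself is simple; a Python sketch (the core counts and boost clocks are published specs, but the 100-second reference time is made up):

```python
def expected_time(ref_time, ref_cores, ref_clock, new_cores, new_clock):
    # Assumes render time scales inversely with CUDA cores x clock rate.
    return ref_time * (ref_cores * ref_clock) / (new_cores * new_clock)

# e.g. extrapolating from a GTX 1080 Ti (3584 cores, ~1.58 GHz boost)
# to a Quadro RTX 6000 (4608 cores, ~1.77 GHz boost):
print(expected_time(100.0, 3584, 1.58, 4608, 1.77))  # ~69 seconds
```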


thanks a lot :slight_smile:

When will an official statement about all the Nvidia tech implementations be shared?

About RT cores for Cycles? Stacking VRAM between RTX cards for Cycles? AI upscaling? AI denoising? Eevee real-time ray tracing? PhysX 4.0?

No news since the tweet two weeks ago.

I don't understand what you said:

“A pure CUDA ray tracer can’t take advantage of the RTX hardware units.”

So Cycles is not going to be able to use the RT units?

Not in its current state from CUDA; we'd have to modify Blender to use OptiX, which can use the RTX hardware.

Ah, OK, but it will be possible. Brecht scared me for a second! hehe

Well, it is still a sizable, non-trivial job; it's not as easy as recompiling with OptiX enabled and off we go…