OptiX render speed vs RTX games/demos render speed. Why are they so different?

When reading the article [Accelerating Cycles using NVIDIA RTX](https://code.blender.org/2019/07/accelerating-cycles-using-nvidia-rtx/), I took a look at the performance chart below:


Then I noticed that for all benchmarks, rendering on RTX hardware (OptiX) still takes several seconds to complete.

Take bmw27, for example, which is the fastest to render:

It renders in ~45 seconds in CUDA, and in ~22 seconds in OptiX.

23 seconds is a really big difference when rendering something (and the gap may be even bigger for longer renders), and I’m really happy RTX support is arriving for Blender.
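To make the difference concrete, here is the arithmetic as a minimal sketch, using the approximate bmw27 times quoted above (45 s CUDA, 22 s OptiX):

```python
# Approximate bmw27 render times from the chart above.
cuda_s = 45    # seconds per frame with CUDA
optix_s = 22   # seconds per frame with OptiX

speedup = cuda_s / optix_s       # relative speedup
saved = cuda_s - optix_s         # seconds saved per frame

print(f"OptiX is ~{speedup:.1f}x faster, saving {saved} s per frame")
# For an animation, that saving multiplies by the frame count:
print(f"Over 250 frames: ~{saved * 250 / 60:.0f} minutes saved")
```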

However, I immediately felt the OptiX times were below my expectations. But why?

Because with the same card, the so-called “RTX games” can run at 60 frames per second (1920 x 1080).

That means in 1 second, 60 images (frames) are rendered.

Yes, I’m aware these RTX games (except Quake II RTX) use the hybrid approach (rasterization + ray tracing), so each game uses RTX hardware for specific purposes (Shadow of the Tomb Raider for shadows, Battlefield V for reflections, and so on…).

However, if we take the game Metro Exodus as an example, it uses real-time ray tracing for global illumination (which I believe is a compute-intensive task).

Yet, it surpasses the average 60 fps (1920 x 1080):

Source: https://wccftech.com/metro-exodus-pc-performance-explored-including-ray-tracing/

Also, there’s the NVIDIA Reflections demo, which can render around 48 frames in just 1 second (1920 x 1080):

That means:

- ray-traced tech demos render around 48 frames per second
- ray-traced game scenes render 64 frames per second (worst case, according to the previous wccftech link)
- OptiX-powered Cycles renders 1 frame every ~22 seconds for the bmw27 demo
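The gap in that list can be quantified with a quick calculation, using the approximate numbers above (48 fps for the Reflections demo, ~22 s per frame for bmw27 in OptiX):

```python
# Rough per-frame throughput comparison, numbers taken from the posts above.
game_fps = 48                    # NVIDIA Reflections demo, 1920x1080
cycles_seconds_per_frame = 22    # bmw27 with OptiX (approximate)

game_frame_time = 1.0 / game_fps                 # seconds per game frame
ratio = cycles_seconds_per_frame / game_frame_time

print(f"Game frame time: {game_frame_time * 1000:.1f} ms")
print(f"Cycles spends ~{ratio:.0f}x longer on one frame")
```

So per frame, the OptiX-rendered bmw27 is roughly a thousand times slower than the demo, which is the gap the answers below explain.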

I can only think the games and demos use ray tracing just a little, and yet some of them (like the Reflections demo) look very realistic.

Is OptiX the same technology behind these demos and games? Or do they use another approach?

I’d like to understand why both performances are so different, assuming both Metro Exodus and demos make use of ray tracing to some perceptible extent.


This is most likely off topic for this forum.

But in short:

  • Games do not ray trace the entire scene. Currently it’s most often used only for shadows, reflections, and AO, or other simple effects (https://youtu.be/gYf33FSxK2s?t=528)
  • For things they do ray-trace, it’s done using very, very few samples (https://youtu.be/gYf33FSxK2s?t=1900)
  • It may not even be done all the time or only when needed e.g. extremely rough surfaces don’t really need it (https://youtu.be/gYf33FSxK2s?t=576)
  • Sometimes there are limitations where only certain types of geometry are supported (https://youtu.be/gYf33FSxK2s?t=1655)
  • Real-time denoisers are critical, and finding one that’s good while supporting all the general-purpose cards across the ecosystems that Blender supports is difficult

Just watch the entire presentation above for a start to set some expectations. Things are always improving on both sides, games and offline, so it’s a very exciting time either way.


In games we’re talking about a budget of 1-2 rays per pixel per frame. This is achieved with lots of trickery - upscaling, denoising, temporal reuse, etc. Since Cycles ray traces everything, it already needs 1 ray per pixel per frame just to determine what’s visible to the camera. Another ray is needed just to determine noisy shadows from a single light source. Throw in antialiasing, depth of field, motion blur, multiple lights, reflections, refractions, SSS and volumetrics, and it ends up at thousands of rays per pixel.

The BMW demo you mention uses 35×35 camera rays, that is more than 1200 rays per pixel just for camera visibility; then multiply that by the number of ray bounces. Compare that to no more than 2 rays per pixel and it should be obvious why games are faster.
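That rays-per-pixel budget can be sketched as a quick calculation. The 35×35 camera samples come from the post above; the bounce count and the 1920×1080 frame size are assumptions for illustration, not the actual bmw27 settings:

```python
# Rays-per-pixel budget: ~1-2 for games vs. thousands for a path tracer.
width, height = 1920, 1080           # assumed frame size for comparison
game_rays_per_pixel = 2              # typical real-time budget

samples_per_pixel = 35 * 35          # 1225 camera rays per pixel (bmw27)
assumed_bounces = 4                  # hypothetical average path length
cycles_rays_per_pixel = samples_per_pixel * (1 + assumed_bounces)

print(f"Game:   {game_rays_per_pixel} rays/pixel -> "
      f"{width * height * game_rays_per_pixel / 1e6:.1f} M rays/frame")
print(f"Cycles: {cycles_rays_per_pixel} rays/pixel -> "
      f"{width * height * cycles_rays_per_pixel / 1e6:.0f} M rays/frame")
```

Even before shading cost, that is a three-orders-of-magnitude difference in ray count per frame.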


The EEVEE engine + RTX is what you are talking about.
We all hope we will see this or a similar technology for EEVEE in the future.


Cycles is a pure path tracer… game engines are real-time renderers that are just enhanced with some very limited ray tracing. That is why there is such a difference.

I also hope Cycles OptiX will be added to the 2.81 master builds soon, so no separate OptiX builds are necessary anymore.

  1. Games don’t do all render passes with path tracing. Most current RTX games use only one ray-traced pass (reflections, or GI, or shadows).
  2. Games don’t ray trace the entire scene in those passes. Usually they use a relatively small chunk of the scene.
  3. Games often use simple baked materials, e.g. a basic PBR shader with as few instructions as possible, sampling baked textures.
  4. And they get away with a low number of samples and use denoisers to get decent results. Just look at 11:05

This is interesting for this conversation:

Project Lavina, from Chaos Group, is a real-time ray-traced demo, and it seems to be fully ray traced (really fully ray traced)


We need this for EEVEE

I don’t think so :slight_smile:

This is like a stripped-down Cycles. For EEVEE I think it’s better to get what UE4 has, a hybrid engine that mixes both rasterization and ray tracing, or we will end up having something like Cycles, but not quite Cycles :slight_smile:

Oh, I thought it was raster + ray tracing like Unreal.
What is this magic then? Ray marching + RT cores enabled?

I don’t know, I only know it’s ray tracing in full, no rasterization there. Project Lavina, from Chaos Group, check for that on Google :slight_smile:

It’s also pretty interesting what AMD is planning to do with their new Full Spectrum Rendering.

That will more or less become EEVEE combined with Cycles in one engine, I think.