Cycles OptiX

No, they don’t have to share the BVH.

1 Like

Hmm… then there is something wrong in the Embree GSoC, because he is converting from BVH4 to BVH2, and he says that if we use CPU+GPU with Embree we would need to use BVH2 for everything.

It’s good news that you say it’s not needed :slight_smile:

And what do you think about the possibility of analyzing the render partway through, for things like more advanced adaptive sampling? (I may be wrong there too.)

1 Like

If adaptive sampling needs some synchronization between tiles it can be added, similar to what we already do for denoising. We can implement those things when needed for a particular algorithm, but it’s unrelated to BVHs.

3 Likes

Good to know, thanks for this!
I’ll start another conversation to ask you another thing about AS, but I don’t want to derail this thread any further; it has been derailed enough already.

1 Like

No problem, @JuanGea. I’ve started this thread for good questions like yours, and good answers like @brecht provides. :+1:

3 Likes

Thanks @MetinSeven :slight_smile:
I’ve continued the conversation in this thread:

Just for the sake of avoiding users thinking that everything I’m saying about adaptive sampling and other things is directly related to OptiX (just in case) :slight_smile:

2 Likes

A little in-between message:

Brecht wrote this about OptiX for Cycles on developer.blender.org:

Note that I would like this to replace the CUDA backend eventually as the officially supported way to render on NVIDIA GPUs. There’s not much point maintaining multiple backends if we can get to feature parity and support all the same cards.

It’d be great if OptiX could fully replace CUDA in the near future, making Cycles rendering settings for NVIDIA GPUs clear again, with only one (the best) option. :+1:

1 Like

Yep, the only thing that worries me about that is that OptiX may not be supported on the 9xx card series, just the 10xx series, and the 9xx series still works pretty well: 2x 980M with 8 GB each are a bit faster than a single 1080 with 8 GB :slight_smile:

But I’m not sure exactly what OptiX supports.

1 Like

I’ve got another question for @brecht:

In this Blender Artists topic about Cycles OptiX, some posts mention that there’s no difference between rendering with a fast GPU and rendering with CPU + GPU.

I always thought that when CPU + GPU is activated, Cycles would utilize the strengths of the CPU and the GPU to distribute rendering across both, e.g. the GPU for floating-point calculations and the CPU for other calculations, both working simultaneously to speed up the rendering process. Hence I thought CPU + GPU would always be faster than GPU only, no matter how fast your GPU is.

Am I wrong about this?

I think I can answer you, at least partially :slight_smile:

The CPU and GPU do share the work, but not by splitting a single task: the work is divided into tiles. So if you have 8 very slow cores and 1 very fast GPU, you get 9 tiles being rendered at the same time, but you will end up waiting for the CPU tiles to finish while the GPU does most of the job.

So CPU+GPU is what it seems to be: the CPU rendering some tiles and the GPU rendering other tiles. I think that’s the correct approach, because doing some calculations on the fly on the CPU and sending them to the GPU would end up being too slow. Of course, if your CPU is slow it may not be worth it, because you may not see any speed gain at all.
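To make that concrete, here is a minimal sketch of how hybrid CPU+GPU rendering and a small tile size can be set from a script. It assumes the Blender 2.8x-era Cycles Python API (where per-device tiles still exist); the backend and tile values are just example choices, not recommendations from the developers:

```python
# Minimal sketch, assuming the Blender 2.8x-era Cycles Python API.
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # or 'OPTIX' on cards that support it
prefs.get_devices()                  # refresh the detected device list

# Enable every detected device, CPU and GPU alike, for hybrid rendering.
for device in prefs.devices:
    device.use = True

scene = bpy.context.scene
scene.cycles.device = 'GPU'          # "GPU Compute", which also uses the enabled CPU devices
scene.render.tile_x = 16             # small tiles so slow CPU tiles don't
scene.render.tile_y = 16             # stall the end of the render
```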

2 Likes

Speaking of CPU+GPU, there are a lot of people on the forums with high-end GPU(s) who find hybrid rendering slower, even with a top-of-the-line Threadripper machine.

Edit: nevermind, it was already discussed above.

I use a 1080 (we cannot consider it top of the line, but it’s not bad) and a 2990WX. You have to use a small tile size, 16x16, and in general I always get better times.

It’s clear that when the two parts are that evenly matched, it seems logical that you don’t see your CPU slowing down your GPU :joy:

I was thinking more of a reasonably priced top-of-the-line CPU, like the 2950X or 1950X. (Although we can agree those are not top of the line anymore.)

I consider my 1950X a really good CPU, but it still cannot keep up with the speed of my 2080 Ti.

Leaving my 2080 Ti on its own gives me a much better benchmark; otherwise I always have to wait for those last Threadripper tiles to finish, and in comparison that just takes too much time.

Edit: nevermind, it was already discussed above.

Yes, in general a 2990WX is about as fast as a 2080. The thing is that you have to use a small tile size to make sure the slower device does not hold back the whole render; in that situation hybrid should be faster, and if it is not, something else may be going on. Even when the Threadripper is slower per tile, it has many cores while the GPU works on one tile at a time, so the tiles rendered on the Threadripper only need to finish faster than that single GPU tile for it to accelerate the render. Unless the CPU is extremely slow, like an old i5 or i3, hybrid rendering with a Threadripper should not end up slower.
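A back-of-the-envelope sketch of that throughput argument (the numbers below are made up for illustration, not benchmarks of any real hardware):

```python
# Illustrative arithmetic only; all figures are hypothetical assumptions.
gpu_tiles_per_second = 1.0        # assume the GPU finishes one tile per second
cpu_seconds_per_tile = 8.0        # assume one CPU core needs 8 s per tile
cpu_cores = 32

cpu_tiles_per_second = cpu_cores / cpu_seconds_per_tile        # 4.0
hybrid_tiles_per_second = gpu_tiles_per_second + cpu_tiles_per_second
print(hybrid_tiles_per_second)    # 5.0: hybrid wins on throughput, as long as
                                  # the last CPU tiles don't leave the GPU idle
```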

1 Like

Yes, on small tiles, of course!

I’m not the only one who finds hybrid rendering too slow; there are multiple RTX users on the Blender Artists OptiX thread who think the same.

Well, now that @brecht is going to focus on Cycles, I’m sure a lot of things will improve, hopefully on this front too :slight_smile:

1 Like

I also heard that Mathieu, the creator of E-Cycles, might join the team and officially work on Blender’s Cycles. (???)

1 Like

That would be great. He is such a nice and kind guy. Does anyone know if Cycles will get viewport denoising that also works with AMD cards?

1 Like

I’d find that surprising, but would welcome it. I read a lot about the speed of E-Cycles, but I’ve never bought it, because I don’t want to be dependent on a paid third-party version of Cycles.

A 4x speed boost is quite a serious achievement.

I’d be really happy to hear of his arrival on the team. Unfortunately, I heard that his work was never accepted into the main build, for reasons I don’t understand yet. It’s quite sad; we could have a native engine as fast as Octane.