Cycles AMD HIP device feedback

Where can I track this issue? I don’t see it mentioned here:

https://developer.blender.org/T91571

It’s covered by “Linux support stabilized and working driver available” and “Support for GPUs older than RDNA architecture”.

I can specifically mention the hipGetDeviceProperties crash though, so it’s clear that it is known.
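
For anyone who wants to check their own setup, a minimal standalone probe for this crash could look something like the sketch below. It assumes a ROCm install that provides libamdhip64.so on the loader path, and it deliberately treats hipDeviceProp_t as an opaque, oversized buffer instead of declaring the full (version-dependent) struct; the only point is to see whether the call returns an error code or crashes.

```python
# Rough probe for the hipGetDeviceProperties crash; assumes libamdhip64.so
# from ROCm is findable by the dynamic loader (e.g. /opt/rocm/lib is configured).
import ctypes

hip = ctypes.CDLL("libamdhip64.so")

count = ctypes.c_int(0)
err = hip.hipGetDeviceCount(ctypes.byref(count))
print("hipGetDeviceCount ->", err, "devices:", count.value)

# hipDeviceProp_t is large and its layout varies between releases; an oversized
# opaque buffer avoids depending on the exact struct definition.
props = ctypes.create_string_buffer(4096)
for dev in range(count.value):
    err = hip.hipGetDeviceProperties(props, ctypes.c_int(dev))
    # A segfault here (rather than a non-zero error code) reproduces the
    # driver-side crash being discussed.
    print("hipGetDeviceProperties(device %d) ->" % dev, err)
```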

I see you have already added it, thanks. I will be doing further testing as I frequently build the latest git devel version.

The question mark in “Support for GPUs older than RDNA architecture?” raises a lot of concern. I also have an RX560 Polaris GPU, but I haven’t tested it yet with HIP.

1 Like

We understand that people are excited to see HIP support on older GPUs. This is being looked at. But we cannot make promises here.

2 Likes

For sure. I am not looking forward to spending 2000-2500 USD on a new, better GPU in the current market, affected by chip and component shortages, the mining craze, and super high demand with low supply, just to replace a perfectly capable GPU that I got for a fraction of current prices. I have a feeling I am definitely not going to be the only one with this issue when Blender 3.x is officially released.

3 Likes

Yep. Same here. I’ve got two Radeon VII cards that, apart from viewport handling, are completely unused in Blender.
And for what I paid for them BOTH I could buy HALF of one W6800 / RX 6900 XT or 3080 Ti :joy:.
It’s bonkers.

But I understand that newer GPUs have priority when it comes to support.

1 Like

Answering a question from https://developer.blender.org/T91571.

  • There will be no Vega support in 3.0, any additional GPU generation support will be for 3.1 and may require a new AMD driver release.
  • I don’t think OpenCL performance was behind CUDA in general; the main issue was stability. There are some initial benchmarks at User:ThomasDinges/AMDBenchmarks - Blender Developer Wiki, but we have not done controlled comparisons with CUDA.
5 Likes

When you mention driver updates, are you talking about Windows? I am using the free Linux kernel driver, and I am pretty sure it is updated regularly along with the kernel. I’ve seen the Pro drivers mentioned, but I’ve been able to render with OpenCL in Blender 2.8-2.9 with my current Linux setup, using the free kernel driver and the opencl package from the AUR. I’ve moved to ROCm 4.5 to test this, and it seems to have all the HIP pieces already.

I am already running the 3.1 devel version from git; is there a place where I can keep an eye on Vega support development progress for 3.1?
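
For reference, a quick way to check whether a given build and driver combination actually exposes a HIP device to Cycles is a few lines in Blender’s Python console. This is just a sketch using the property names from the 3.x Cycles add-on, so treat it as illustrative rather than official:

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = 'HIP'  # raises TypeError if this build has no HIP backend
prefs.get_devices()                # refresh the device list from the driver

for dev in prefs.devices:
    print(dev.type, dev.name, "enabled:", dev.use)
```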

2 Likes

There will be no Vega support in 3.0, any additional GPU generation support will be for 3.1 and may require a new AMD driver release.

Thanks for answering this! Is there some place where we can follow which new devices are being added? Or will there be an announcement whenever a new generation is supported? Is it known how many generations back they are aiming to support?

I have an RX 570, and like someone else said, it would be a bummer not being able to use our graphics cards just because we bought them a year too early (especially considering the current situation with graphics card prices and availability).

1 Like

For following HIP development, you could subscribe to this task, though there will be many updates unrelated to Vega.
https://developer.blender.org/T91571

As for the driver (and compiler), I am referring to both Windows and Linux. There can be bugs or limitations in them that need to be fixed before things work. For example, I think the crash you encountered is almost certainly something that needs to be fixed in the Linux driver.

3 Likes

I recently got an RX 6600 XT and this is amazing news. My Vega 64 is still in my wife’s computer, so I hope we get support for it as well.

HIP works great in Blender, a lot less unstable than OpenCL and pretty fast :slightly_smiling_face:
I rendered the classroom .blend file at 300 samples in 1 min 28 seconds on the RX 6600 XT in 3.0.
In Blender 2.93 with 64×64 tiles it rendered in 2 min 40 seconds.

At 300 samples and 64×64 tile size the Vega 64 rendered classroom in 4 min 55 seconds.
At 300 samples and 512×512 tile size the Vega 64 rendered classroom in 3 min 01 seconds.
At 300 samples and 512×512 tile size the RX 6600 XT rendered classroom in 2 min 36 seconds.

All of those were rendered in 2.93 with OpenCL.
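
For anyone who wants to reproduce numbers like these, here is a rough sketch of a timing script. The scene file name is a placeholder, the sample count and tile size are just the values from this post, and it assumes the compute device (HIP or OpenCL) has already been selected in the preferences, e.g. as in the snippet earlier in the thread. Run it as `blender --background classroom.blend --python bench.py`:

```python
import bpy, time

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'
scene.cycles.samples = 300

# Explicit tile sizes only exist up to 2.93; 3.0 tiles GPU renders automatically.
if bpy.app.version < (3, 0, 0):
    scene.render.tile_x = 512
    scene.render.tile_y = 512

start = time.time()
bpy.ops.render.render(write_still=False)
print("render time: %.1f s" % (time.time() - start))
```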

Thanks AMD and the Blender Foundation.

1 Like

  1. Thank you for clarifying. No issues, I can wait for support (for Vega) to be added in the 3.1 alpha. Just a quick reminder that many people are stuck on older GPUs because of the ongoing inflated prices of GPUs.

  2. My claim of OpenCL being behind was based on an aggregate look at the Blender benchmark data available on the official website. Some time back I carefully looked at the render time differences for each scene in the Blender benchmark, and for almost every scene the comparison between equivalent GPUs (like 1080 vs Vega 64, 3080 vs 6800 XT) was in favour of Nvidia (CUDA) by almost 2x. And OptiX times were even faster. This is also in line with the general chatter in the Blender community about Nvidia cards being much better for Blender.

But yeah, obviously stability was a big issue like you said. Thanks for answering the questions here, and for linking it in the developer.blender.org thread.

1 Like

I don’t think that is true; in most cases OpenCL vs CUDA performance is comparable between equivalent cards, and if anything, according to opendata.blender.org, AMD is somewhat faster than the comparable Nvidia card.

Notice how the Vega 64 favors a bigger tile size, so benchmarks should always be taken with a grain of salt.

I’ve been a long-time OpenCL user (RX 480, Vega 64). Performance was never my issue; it was the stability and the separate kernel loading that were annoying. HIP is amazing on the RX 6600 XT, and I can’t wait to see Vega with HIP.

2 Likes

The benchmarks on the Blender Open Data site use a tile size of 512×512 when rendering with the GPU.

When I did my benchmarks (User:ThomasDinges/AMDBenchmarks - Blender Developer Wiki) the victor scene was not working for me, as it didn’t fit into my GPU’s memory (8 GB VRAM). With a new build it looks like out-of-core rendering is now working. :slight_smile:

5 Likes

Nice!
Just wanted to ask: is there any performance penalty with out-of-core GPU rendering?

Theoretically there should be, as with out-of-core rendering the GPU needs to fetch some of the data required for rendering from system RAM rather than VRAM. The speed at which it can fetch that data from RAM is typically much lower than from VRAM, so you get reduced performance.
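
As a very rough back-of-the-envelope illustration of that reasoning, with assumed round-number bandwidths rather than measurements (real penalties depend heavily on how often and how randomly the spilled data is accessed):

```python
# Hypothetical numbers: ~16 GB/s for PCIe 3.0 x16 vs a few hundred GB/s for
# on-card memory (HBM2 on a Vega 64 is in the ~480 GB/s range).
spill_gb = 2.0      # made-up amount of scene data that no longer fits in VRAM
pcie_gbps = 16.0    # assumed host-to-GPU transfer bandwidth
vram_gbps = 480.0   # assumed on-card memory bandwidth

print("one full pass over the spilled data:")
print("  over PCIe : %.0f ms" % (spill_gb / pcie_gbps * 1000))  # ~125 ms
print("  from VRAM : %.0f ms" % (spill_gb / vram_gbps * 1000))  # ~4 ms
```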

1 Like

It can be very high. See the note here: Reference/Release Notes/2.80/Cycles - Blender Developer Wiki

1 Like

Thank you both. So this looks like a very specific use case. It would be interesting to benchmark many scenarios.

I have an iMac Pro with a Vega 64. I assume that even if and when Vega is supported, Macs can never be, because no driver can be written. Is that right?