For AMD GPUs, there is a new backend based on the HIP platform. In Blender 3.0, this is supported on Windows with RDNA and RDNA2 generation discrete graphics cards. This includes the Radeon RX 5000 and RX 6000 series GPUs. Driver version Radeon Pro 21.Q4 or newer is required.
We are working with AMD to add support for Linux and to investigate support for earlier generations of graphics cards, for the Blender 3.1 release.
Bug reports should be made to the Blender bug tracker. The best way to access it is to open Blender, select Help -> Report a Bug from the top of the Blender window, and fill out the relevant information.
I would also like to note that I personally cannot reproduce this issue with CPU or GPU rendering on the latest master on Linux with an Nvidia GPU. So you might want to provide the relevant .blend file and associated files when filing your bug report.
There are a number of factors that could be affecting this. I would recommend reporting it to the bug tracker in case it is a bug that needs to be investigated.
# Python backtrace
File "/usr/share/blender/3.1/scripts/addons/cycles/properties.py", line 1355 in get_devices_for_type
File "/usr/share/blender/3.1/scripts/addons/cycles/properties.py", line 1445 in draw_impl
File "/usr/share/blender/3.1/scripts/startup/bl_ui/space_userpref.py", line 612 in draw_centered
File "/usr/share/blender/3.1/scripts/startup/bl_ui/space_userpref.py", line 182 in draw
Why is it crashing and how can it be fixed?
rocminfo output:

Agent 2
*******
  Name:                    gfx900
  Uuid:                    GPU-0215087234ce2984
  Marketing Name:          AMD Radeon RX Vega
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          4096(0x1000)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    1
  Device Type:             GPU
  Cache Info:
    L1:                    16(0x10) KB
    L2:                    4096(0x1000) KB
  Chip ID:                 26751(0x687f)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   1630
  BDFID:                   2304
  Internal Node ID:        1
  Compute Unit:            64
  SIMDs per CU:            4
  Shader Engines:          4
  Shader Arrs. per Eng.:   1
  WatchPts on Addr. Ranges: 4
  Features:                KERNEL_DISPATCH
  Fast F16 Operation:      FALSE
  Wavefront Size:          64(0x40)
  Workgroup Max Size:      1024(0x400)
  Workgroup Max Size per Dimension:
    x                      1024(0x400)
    y                      1024(0x400)
    z                      1024(0x400)
  Max Waves Per CU:        40(0x28)
  Max Work-item Per CU:    2560(0xa00)
  Grid Max Size:           4294967295(0xffffffff)
  Grid Max Size per Dimension:
    x                      4294967295(0xffffffff)
    y                      4294967295(0xffffffff)
    z                      4294967295(0xffffffff)
  Max fbarriers/Workgrp:   32
  Pool Info:
    Pool 1
      Segment:             GLOBAL; FLAGS: COARSE GRAINED
      Size:                8372224(0x7fc000) KB
      Allocatable:         TRUE
      Alloc Granule:       4KB
      Alloc Alignment:     4KB
      Accessible by all:   FALSE
    Pool 2
      Segment:             GROUP
      Size:                64(0x40) KB
      Allocatable:         FALSE
      Alloc Granule:       0KB
      Alloc Alignment:     0KB
      Accessible by all:   FALSE
  ISA Info:
    ISA 1
      Name:                amdgcn-amd-amdhsa--gfx900:xnack-
      Machine Models:      HSA_MACHINE_MODEL_LARGE
      Profiles:            HSA_PROFILE_BASE
      Default Rounding Mode: NEAR
      Default Rounding Mode: NEAR
      Fast f16:            TRUE
      Workgroup Max Size:  1024(0x400)
      Workgroup Max Size per Dimension:
        x                  1024(0x400)
        y                  1024(0x400)
        z                  1024(0x400)
      Grid Max Size:       4294967295(0xffffffff)
      Grid Max Size per Dimension:
        x                  4294967295(0xffffffff)
        y                  4294967295(0xffffffff)
        z                  4294967295(0xffffffff)
      FBarrier Max Size:   32
*** Done ***
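As a side note, the ISA name in that dump (amdgcn-amd-amdhsa--gfx900) is what identifies the GPU architecture, which is what matters for HIP support. Here is a small hypothetical helper (not part of Blender or ROCm, just a sketch) to pull the gfx target out of rocminfo output:

```python
import re

def gpu_isa_targets(rocminfo_text: str) -> list:
    """Return the amdgcn ISA targets (e.g. 'gfx900' for Vega) found in
    rocminfo output. ISA lines look like:
      Name: amdgcn-amd-amdhsa--gfx900:xnack-
    """
    return re.findall(r"amdgcn-amd-amdhsa--(gfx\w+)", rocminfo_text)

sample = "Name: amdgcn-amd-amdhsa--gfx900:xnack-"
print(gpu_isa_targets(sample))  # ['gfx900']
```

Anything reporting gfx10xx would be RDNA/RDNA2, which is what 3.0 supports; gfx900 is Vega.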
@FogLizard, it’s currently not working on Linux or with Vega graphics cards, only Windows and RDNA. There are issues in the HIP compiler and/or driver that need to be solved.
I see you have already added it, thanks. I will be doing further testing, as I frequently build the latest git devel version.
The question mark in “Support for GPUs older than RDNA architecture?” raises a lot of concern. I also have an RX 560 (Polaris) GPU, but I haven’t tested it with HIP yet.
For sure, as I am not looking forward to spending 2000-2500 USD on a new, better GPU in the current market, affected by chip and component shortages, the mining craze, and high demand with low supply, just to replace a perfectly capable GPU that I got for a fraction of current prices. I have a feeling I am definitely not going to be the only one with this issue when Blender 3.x is officially released.
Yep. Same here. I’ve got two Radeon VII cards that, apart from viewport handling, are completely unused in Blender.
And for what I paid for them BOTH, I could buy HALF of a single W6800, RX 6900 XT, or 3080 Ti.
It’s bonkers.
But I understand that newer GPUs have priority when it comes to support.
There will be no Vega support in 3.0; any additional GPU generation support will be for 3.1 and may require a new AMD driver release.
I don’t think OpenCL performance was behind CUDA in general, mainly stability was the issue. There are some initial benchmarks here, but we have not done controlled comparisons with CUDA. User:ThomasDinges/AMDBenchmarks - Blender Developer Wiki
When you mention driver updates, are you talking about Windows? I am using the free Linux kernel driver, and I am pretty sure it is updated regularly along with the kernel. I’ve seen the Pro drivers mentioned, but I’ve been able to render with OpenCL in Blender 2.8-2.9 on my current Linux setup with the free kernel driver and the opencl package from the AUR. I’ve moved to ROCm 4.5 to test this, and it seems to have all the HIP stuff already.
I am already running 3.1 devel version from git, is there a place where I can keep an eye on Vega support development progress for 3.1?
There will be no Vega support in 3.0; any additional GPU generation support will be for 3.1 and may require a new AMD driver release.
Thanks for answering this! Is there some place where we can follow what new devices are being added? Or will there be an announcement whenever a new generation is supported? Is it known how many generations back they are aiming to support?
I have an RX 570, and like someone else said, it would be a bummer not being able to use our graphics cards just because we bought them a year too early (especially considering the current situation with graphics card prices and availability).
For following HIP development, you could subscribe to this task, though there will be many updates unrelated to Vega. https://developer.blender.org/T91571
As for the driver (and compiler), I refer to both Windows and Linux. There can be bugs or limitations in them that need to be fixed before things work. For example I think the crash you encountered is almost certainly something that needs to be fixed in the Linux driver.
I recently got an RX 6600 XT and this is amazing news. My Vega 64 is still in my wife’s computer, so I hope we get support for it as well.
HIP works great in Blender: far more stable than OpenCL, and pretty fast.
I rendered the classroom blend file at 300 samples in 1 min 28 seconds on the RX 6600 XT in 3.0.
In Blender 2.93 at 64×64 tile size it rendered in 2 min 40 seconds.
At 300 samples and 64×64 tile size the Vega 64 rendered classroom in 4 min 55 seconds.
At 300 samples and 512×512 tile size the Vega 64 rendered classroom in 3 min 01 seconds.
At 300 samples and 512×512 tile size the RX 6600 XT rendered classroom in 2 min 36 seconds.
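For reference, the relative speedups implied by those times are easy to compute. A quick sketch (the labels are my reading of which configuration each number belongs to):

```python
# Compare the classroom render times quoted above (all at 300 samples).
def seconds(minutes: int, secs: int) -> int:
    """Convert a 'X min Y seconds' time to total seconds."""
    return minutes * 60 + secs

times = {
    "RX 6600 XT, HIP, Blender 3.0":          seconds(1, 28),
    "RX 6600 XT, OpenCL 2.93, 64x64 tiles":  seconds(2, 40),
    "Vega 64, OpenCL, 64x64 tiles":          seconds(4, 55),
    "Vega 64, OpenCL, 512x512 tiles":        seconds(3, 1),
    "RX 6600 XT, OpenCL, 512x512 tiles":     seconds(2, 36),
}

# Use the RX 6600 XT OpenCL 64x64 run as the baseline.
baseline = times["RX 6600 XT, OpenCL 2.93, 64x64 tiles"]
for name, t in times.items():
    print(f"{name}: {t} s ({baseline / t:.2f}x vs baseline)")
```

By this reading, HIP on the RX 6600 XT is roughly 1.8x faster than the same card under OpenCL with 64×64 tiles.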
Thank you for clarifying. No issues, I can wait for Vega support to be added in a 3.1 alpha. Just a quick reminder that many people are stuck on older GPUs because of the ongoing inflated GPU prices.
My claim of OpenCL being behind was based on an aggregate look at the Blender benchmark data available on the official website. Some time back I carefully compared the render time differences for each scene in the Blender benchmark, and for almost every scene the comparison between equivalent GPUs (like the 1080 vs Vega 64, or the 3080 vs 6800 XT) was in favour of Nvidia (CUDA) by almost 2x, and OptiX times were even faster. This is also in line with the general chatter in the Blender community about Nvidia cards being much better for Blender.
But yeah, obviously the stability was a big issue like you said. Thanks for replying to the questions here, and for linking it in the developer.blender.org thread.
I don’t think that is true. In most cases OpenCL vs CUDA performance is comparable between equivalent cards; if anything, according to opendata.blender.org, AMD is somewhat faster than the comparable Nvidia card.
Notice how the Vega 64 favors a bigger tile size, so benchmarks should always be taken with a grain of salt.
I’ve been a long-time OpenCL user (RX 480, Vega 64). Performance was never my issue; it was the stability and the separate kernel loading that were annoying. HIP is amazing on the RX 6600 XT, and I can’t wait to see Vega with HIP.