Cycles AMD HIP device feedback

This is a topic for feedback on Cycles AMD GPU rendering in Blender 3.0.

See this blog post for more information.

And the release notes.

For AMD GPUs, there is a new backend based on the HIP platform. In Blender 3.0, this is supported on Windows with RDNA and RDNA2 generation discrete graphics cards, which includes the Radeon RX 5000 and RX 6000 series GPUs. Driver version Radeon Pro 21.Q4 or newer is required.

We are working with AMD to add Linux support and to investigate earlier generation graphics cards for the Blender 3.1 release.

Please report any bugs to the tracker:
https://developer.blender.org/maniphest/task/edit/form/1/

6 Likes

There is currently a known bug with OpenVDB and smoke volume rendering:
https://developer.blender.org/T92984
https://developer.blender.org/T93045

Bug reports should be made on the Blender bug tracker. The best way to access it is to open Blender, select Help -> Report a Bug from the top of the window, and fill out the relevant information.

I would also like to note that I personally cannot reproduce this issue with CPU or GPU rendering on the latest master on Linux with an Nvidia GPU. So you might want to provide the relevant .blend file and associated files when filing your bug report.

When I use CPU rendering, the result is still white. It may be a problem with today’s 3.0 beta. I am on Windows, and the project file is the standard Classroom scene.

Then there are a number of factors that could be impacting this. I would recommend reporting it to the bug tracker just in case it is a bug that needs to be investigated.

I have compiled the latest available Blender 3.1 development version on Arch Linux with ROCm 4.5 installed, with the changes from this commit.

https://developer.blender.org/rB01f39ef89d40bd8dfced3efbe1c9007a0c6532cc

I have a Vega 64 and use the kernel driver. I have changed these options specifically:

CMakeLists.txt
# AMD HIP
if(WIN32)
  option(WITH_CYCLES_DEVICE_HIP "Enable Cycles AMD HIP support" ON)
else()
  # Changed: this defaults to OFF outside Windows; forced ON to build HIP on Linux.
  option(WITH_CYCLES_DEVICE_HIP "Enable Cycles AMD HIP support" ON)
endif()
# Changed: defaults to OFF; enabled to precompile the HIP kernel binaries.
option(WITH_CYCLES_HIP_BINARIES "Build Cycles AMD HIP binaries" ON)
# Changed: gfx900 (Vega) added in front of the stock RDNA/RDNA2 list.
set(CYCLES_HIP_BINARIES_ARCH gfx900 gfx1010 gfx1011 gfx1012 gfx1030 gfx1031 gfx1032 gfx1034 CACHE STRING "AMD HIP architectures to build binaries for")
mark_as_advanced(WITH_CYCLES_DEVICE_HIP)
mark_as_advanced(CYCLES_HIP_BINARIES_ARCH)
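
For reference, the same options can also be set at configure time instead of editing the file, e.g. cmake -DWITH_CYCLES_DEVICE_HIP=ON -DWITH_CYCLES_HIP_BINARIES=ON -DCYCLES_HIP_BINARIES_ARCH="gfx900;gfx1010;gfx1011;gfx1012;gfx1030;gfx1031;gfx1032;gfx1034" (the architecture list is a CMake cache variable, so a semicolon-separated list works on the command line).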

util.h
return (major > 7) || (major == 7 && minor >= 0);
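
For context, that return statement is the tail of the device support check; if I read the sources right, the surrounding helper in Cycles’ device/hip/util.h looks roughly like this (a sketch, the exact code may differ between versions):

static inline bool hipSupportsDevice(const int hipDevId)
{
  int major, minor;
  hipDeviceGetAttribute(&major, hipDeviceAttributeComputeCapabilityMajor, hipDevId);
  hipDeviceGetAttribute(&minor, hipDeviceAttributeComputeCapabilityMinor, hipDevId);
  /* Stock 3.0 only accepts RDNA-class devices; lowering the threshold to 7
   * lets Vega (gfx900, which reports major version 9) through as well. */
  return (major > 7) || (major == 7 && minor >= 0);
}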

It compiled without any errors and installed successfully.

When I go to Edit > Preferences > System and select HIP, Blender crashes with a Segmentation fault (core dumped) error.

blender.crash.txt
# Blender 3.1.0, Commit date: 2021-11-16 09:16, Hash d4c868da9f97

# backtrace
blender(BLI_system_backtrace+0x34) [0x5569b63a0d34]
blender(+0xf1458d) [0x5569b3dee58d]
/usr/lib/libc.so.6(+0x3cda0) [0x7f5a5afe4da0]
blender(hipGetDeviceProperties+0) [0x5569b7f3f398]

# Python backtrace
File "/usr/share/blender/3.1/scripts/addons/cycles/properties.py", line 1355 in get_devices_for_type
File "/usr/share/blender/3.1/scripts/addons/cycles/properties.py", line 1445 in draw_impl
File "/usr/share/blender/3.1/scripts/startup/bl_ui/space_userpref.py", line 612 in draw_centered
File "/usr/share/blender/3.1/scripts/startup/bl_ui/space_userpref.py", line 182 in draw

Why is it crashing and how can it be fixed?
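
In case it helps to isolate this, here is a minimal standalone HIP program (my own test, not Blender code) that exercises the same hipGetDeviceProperties call that sits at the top of the backtrace. If this also segfaults, the problem is in the ROCm/HIP stack rather than in Blender.

hip_props_test.cpp
// Build with: hipcc hip_props_test.cpp -o hip_props_test
#include <hip/hip_runtime.h>
#include <cstdio>

int main()
{
  int count = 0;
  hipError_t err = hipGetDeviceCount(&count);
  if (err != hipSuccess) {
    printf("hipGetDeviceCount failed: %s\n", hipGetErrorString(err));
    return 1;
  }
  printf("Found %d HIP device(s)\n", count);
  for (int i = 0; i < count; i++) {
    hipDeviceProp_t props;
    // The call at the top of Blender's crash backtrace.
    err = hipGetDeviceProperties(&props, i);
    if (err != hipSuccess) {
      printf("hipGetDeviceProperties(%d) failed: %s\n", i, hipGetErrorString(err));
      continue;
    }
    printf("Device %d: %s (arch %s)\n", i, props.name, props.gcnArchName);
  }
  return 0;
}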

rocminfo output:
Agent 2
*******
Name: gfx900
Uuid: GPU-0215087234ce2984
Marketing Name: AMD Radeon RX Vega
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 4096(0x1000)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 4096(0x1000) KB
Chip ID: 26751(0x687f)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 1630
BDFID: 2304
Internal Node ID: 1
Compute Unit: 64
SIMDs per CU: 4
Shader Engines: 4
Shader Arrs. per Eng.: 1
WatchPts on Addr. Ranges:4
Features: KERNEL_DISPATCH
Fast F16 Operation: FALSE
Wavefront Size: 64(0x40)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 40(0x28)
Max Work-item Per CU: 2560(0xa00)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx900:xnack-
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***

1 Like

@FogLizard, it’s currently not working on Linux or with Vega graphics cards; only Windows and RDNA are supported. There are issues in the HIP compiler and/or driver that need to be solved.

Where can I track this issue? I don’t see it mentioned here:

https://developer.blender.org/T91571

It’s covered by “Linux support stabilized and working driver available” and “Support for GPUs older than RDNA architecture”.

I can specifically mention the hipGetDeviceProperties crash though, so it’s clear that it is known.

I see you have already added it, thanks. I will be doing further testing, as I frequently build the latest git development version.

The question mark in “Support for GPUs older than RDNA architecture?” raises a lot of concern. I also have an RX 560 Polaris GPU, but I haven’t tested it with HIP yet.

1 Like

We understand that people are excited to see HIP support on older GPUs. This is being looked at. But we cannot make promises here.

2 Likes

For sure. I am not looking forward to spending 2000-2500 USD on a new, better GPU in the current market, affected by the chip and component shortage, the mining craze, and sky-high demand with low supply, just to replace a perfectly capable GPU that I got for a fraction of current prices. I have a feeling I am definitely not going to be the only one with this issue when Blender 3.x is officially released.

3 Likes

Yep, same here. I’ve got two Radeon VII cards that, apart from viewport handling, are completely unused in Blender.
For what I paid for them BOTH, I could buy HALF of a W6800, RX 6900 XT, or 3080 Ti :joy:.
It’s bonkers.

But I understand that newer GPUs have priority when it comes to support.

1 Like

Answering a question from https://developer.blender.org/T91571.

  • There will be no Vega support in 3.0, any additional GPU generation support will be for 3.1 and may require a new AMD driver release.
  • I don’t think OpenCL performance was behind CUDA in general; mainly stability was the issue. There are some initial benchmarks here, but we have not done controlled comparisons with CUDA: User:ThomasDinges/AMDBenchmarks - Blender Developer Wiki
5 Likes

When you mention driver updates, are you talking about Windows? I am using the free Linux kernel driver, and I am pretty sure it is updated regularly along with the kernel. I’ve seen the Pro drivers mentioned, but I’ve been able to render with OpenCL in Blender 2.8-2.9 on my current Linux setup with the free kernel driver and the opencl package from the AUR. I’ve moved to ROCm 4.5 to test this, and it seems to include all the HIP components already.

I am already running the 3.1 development version from git. Is there a place where I can keep an eye on Vega support development progress for 3.1?

2 Likes

There will be no Vega support in 3.0, any additional GPU generation support will be for 3.1 and may require a new AMD driver release.

Thanks for answering this! Is there some place where we can follow which new devices are being added? Or will there be an announcement whenever a new generation is supported? Is it known how many generations back they are aiming to support?

I have an RX 570, and like someone else said, it would be a bummer not to be able to use our graphics cards just because we bought them a year too early (especially considering the current situation with graphics card prices and availability).

1 Like

For following HIP development, you could subscribe to this task, though there will be many updates unrelated to Vega.
https://developer.blender.org/T91571

As for the driver (and compiler), I refer to both Windows and Linux. There can be bugs or limitations in them that need to be fixed before things work. For example, I think the crash you encountered is almost certainly something that needs to be fixed in the Linux driver.

3 Likes

I recently got an RX 6600 XT and this is amazing news. My Vega 64 is still in my wife’s computer, so I hope we get support for those as well.

HIP works great in Blender, a lot more stable than OpenCL and pretty fast :slightly_smiling_face:
I rendered the Classroom blend file at 300 samples in 1 min 28 seconds on the RX 6600 XT in 3.0.
In Blender 2.93 at a 64×64 tile size it rendered in 2 min 40 seconds.

At 300 samples and a 64×64 tile size, the Vega 64 rendered Classroom in 4 min 55 seconds.
At 300 samples and a 512×512 tile size, the Vega 64 rendered Classroom in 3 min 01 seconds.
At 300 samples and a 512×512 tile size, the RX 6600 XT rendered Classroom in 2 min 36 seconds.

All rendered in 2.93 with OpenCL.

Thanks AMD and Blender foundation.

1 Like
  1. Thank you for clarifying. No issues, I can wait for Vega support to be added in a 3.1 alpha. Just a quick reminder that many people are stuck on older GPUs because of the ongoing inflated GPU prices.

  2. My claim of OpenCL being behind was based on an aggregate look at the Blender benchmark data available on the official website. Some time back I carefully looked at the general render time difference for each scene in the Blender benchmark, and for almost every scene the comparison between equivalent GPUs (like the 1080 vs Vega 64, or the 3080 vs 6800 XT) came out in favour of Nvidia (CUDA) by almost 2x. OptiX times were even faster. This is also in line with the general chatter in the Blender community about Nvidia cards being much better for Blender.

But yeah, obviously stability was a big issue, like you said. Thanks for answering the questions here and linking them in the developer.blender.org thread.

1 Like

I don’t think that is true. In most cases OpenCL vs CUDA performance is comparable between equivalent cards; if anything, according to opendata.blender.org, AMD is somewhat faster than the comparable Nvidia card.

Notice how the Vega 64 favors a bigger tile size, so benchmarks should always be taken with a grain of salt.

I’ve been a long-time OpenCL user (RX 480, Vega 64). Performance was never my issue; it was the stability and the separate kernel loading that were annoying. HIP is amazing on the RX 6600 XT, and I can’t wait to see Vega with HIP.

2 Likes