I’m not sure if this is the right place to give feedback, and this has probably been mentioned before. Sorry, I didn’t read all 225 comments.
I’m now rendering on the GPU on my iMac Pro (Vega 64). I see there’s an option to render with GPU or GPU and CPU in the Metal prefs. I thought adding the CPU would make a big difference, but for render time the gain is negligible. For memory, though, if I look at the peak, GPU only took roughly twice as much memory, and I don’t understand why.
I was rendering the Lone Monk demo file.
GPU only: 1271.67 MB, peak 1271.68 MB
CPU and GPU: 479.01 MB, peak 603.59 MB
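For anyone who wants to flip between the two modes from a script rather than the prefs UI, here is a rough, untested sketch using the Cycles Python API (the Metal/CPU entries are whatever `get_devices()` reports on your machine):

```python
import bpy

# Cycles add-on preferences hold the compute device settings
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "METAL"   # assumes a Metal-capable build (3.1+)
prefs.get_devices()                   # refresh the device list

use_cpu = False  # flip to True for the "GPU and CPU" case
for dev in prefs.devices:
    dev.use = use_cpu if dev.type == "CPU" else True

bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)
```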
I tested in the Windows environment many times: at 300 samples this scene really does take 1 minute 32 seconds, and the CPU (5900X) does it in 4 minutes 9 seconds. If Linux can finish it in 40 seconds, that means it’s about 2.5× faster.
Funnybob, the information found below is still relevant for you. However, for future reference, feedback on Metal should go to the Metal feedback thread: Cycles Apple Metal device feedback - #340 by S.I
In its current form, Cycles-X schedules work less than ideally between multiple devices of varying performance in scenes whose complexity varies across the frame. As a result, enabling CPU + GPU usually doesn’t give people the performance increase they expect.
Investigation is underway to fix this, but until that fix is implemented, this is how Cycles-X works.
Can we render the animation as separate frame ranges in parallel instances? Even if it’s only 1/5 faster, it saves 1/5 of the time. The reality is that even if you open two instances, the earlier GPU render stalls as soon as the new project starts a CPU render.
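Roughly what I have in mind, as an untested sketch (paths, frame ranges, and forcing one instance to the CPU are all just placeholders):

```python
# Render two frame ranges of the same .blend in parallel:
# one instance on the GPU (as saved in the file) and one forced to the CPU.
import subprocess

BLEND = "/path/to/project.blend"
OUT = "/path/to/frames/frame_####"

gpu_job = subprocess.Popen([
    "blender", "-b", BLEND, "-o", OUT,
    "-s", "1", "-e", "120", "-a",          # -s/-e must come before -a
])
cpu_job = subprocess.Popen([
    "blender", "-b", BLEND, "-o", OUT,
    "--python-expr", "import bpy; bpy.context.scene.cycles.device = 'CPU'",
    "-s", "121", "-e", "240", "-a",
])

gpu_job.wait()
cpu_job.wait()
```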
I think that is to be expected. I don’t know much about HIP or Blender’s use of the GPU, but I do have experience with GPU computation in general.
Even if you choose GPU rendering, the CPU is still very busy for part of the render preparing all the data for the GPU. So if the CPU has to be shared with another process, you lose a lot of GPU speed as well, because the GPU idles while waiting for the CPU to prepare the next batch.
Maybe you could try lowering the priority of the second (CPU-only) instance so that it only uses the ‘left-over’ CPU power?
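Something like `nice -n 19 blender -b project.blend -a` from a shell should do it, or if you’re launching the instances from a script, a minimal sketch (assumes Linux/macOS; the path is a placeholder):

```python
# Start the CPU-only instance at a low scheduling priority so it only
# gets CPU time the GPU-feeding instance isn't using.
import os
import subprocess

subprocess.Popen(
    ["blender", "-b", "/path/to/project.blend", "-a"],
    preexec_fn=lambda: os.nice(19),   # POSIX only; Windows needs a priority class instead
)
```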
(But this is getting rather off-topic, so I’ll stop here.)
I did a few more tests: Blender always crashes immediately if I try to render anything with an image texture. And it’s not caused by the open-source ROCm platform as I initially suspected; it fails in exactly the same way with the official ROCm 5.1.1 binaries. I also tried multiple kernel versions (5.17.4 zen and xanmod, 5.15.35), with the same result:
The 6800XT scores between 600 and 2300 on opendata.blender. What score does your computer get with your 6800XT on opendata.blender? If you are not getting above 2000, check your card / drivers / computer… If you are getting a 2000+ score, then this could be a Blender issue.
I also just noticed that everyone who reported at least limited success uses an RDNA2 card, while Luciddream and I both have crashes and we both use 5700XTs.
I’m pretty sure my 6800XT is running at a 2560 MHz core frequency and 2120 MHz memory. I looked up the online benchmarks: the 6800XT in 3.0 is 63 s, while mine in 3.1.2 is 97 s, so it seems to have become slower (I’ll re-download the standard project when I have time). Still, the 40 s on Linux is nearly a 50 percent performance improvement over that 63 s. Of course, I don’t know how many samples the review used: 6800xt rendering slower than expected at 63s - YouTube
I think the review may have used a much smaller number of samples; I remember 2.93 used 150. I rendered once with a 3.2 alpha and it took 2 minutes, slower than 3.1.2.
With the Techgage 3.0 review, they changed the render resolution of the classroom scene to 2560×1440 and the tile size to 256×256; I assume they used 300 samples (I got 77 s when rendering classroom to the Techgage spec).
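If anyone wants to match that spec before comparing numbers, a small sketch using the 3.x Python API should do it (the 300-sample count is my assumption, not something Techgage confirmed):

```python
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 2560
scene.render.resolution_y = 1440
scene.render.resolution_percentage = 100
scene.cycles.samples = 300            # assumed sample count

# Cycles-X uses a single tile size instead of separate X/Y tile dimensions
scene.cycles.use_auto_tile = True
scene.cycles.tile_size = 256
```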
Do you know your Blender benchmark “score”, as opposed to the classroom render time? You should get a 2000+ “score” for a 6800XT on Windows, Linux, or Mac.