Still need Metal performance numbers in the release notes and an image for point clouds. Brecht and Thomas will handle this.
GPU subdivision still has two bugs: one regarding wireframes (Kévin has a patch under review), another for AMD cards (Jeroen is looking into it).
MNEE is not working on Metal yet. Brecht looked into it but ran into register limits and will ask Michael from Apple for help. Brecht also still needs to write a first draft of the release notes. Christophe/Olivier will provide an example .blend for the release notes.
Christophe asked about spectral rendering. This has been under development, but we are still waiting for the developer to submit patches for us to review.
There are now various tasks and patches on multi-device rendering, also covering denoising performance. But we are still waiting for someone to do the work of turning this into a stable implementation that we can use in Blender, not just a prototype.
This is a weekly video chat meeting for planning and discussion of Blender rendering development. Any contributor (developer, UI/UX designer, writer, …) working on rendering in Blender is welcome to join and add proposed items to the agenda.
For users and other interested parties, we ask that you read the meeting notes instead so that the meeting can remain focused.
No news on Path Guiding, Many Light Sampling, or general CPU optimizations?
So far, it seems the only people who have gotten speedups are those willing to shell out for an NVIDIA RTX card (while CPU users only get broken promises). Please don’t put these things on the back burner and forget about them.
No news; if there is any, it will be in the meeting notes.
While we haven’t had time to work on CPU optimizations yet and we are still working on adding support for more GPUs, this has not been specific to RTX at all. There are performance improvements on non-RTX NVIDIA cards, AMD cards and of course on Apple computers for which there was no GPU rendering at all.
I want to second CPU rendering improvements. It has a couple of advantages that the average user might not notice:
It lets you render scenes that don’t fit in VRAM. This is an issue for low-end machines and laptops, which often have low-tier GPUs. Those kinds of machines are often used by students and beginners. The VRAM limitation can also be a factor in massive scenes that exceed the memory capacity available on GPUs currently on the market.
CPU rendering can often be used in the background while the user is working on GPU-heavy stuff.
Render farms provide a CPU rendering option.
Some niche features like OSL are CPU-only (for now).
Tech is not stagnant, and we might have 128-core CPUs (or even bigger) on the market soon.
Until HIP support is provided, it is the only way for some AMD GPU owners to render scenes in any Blender version from 3.0 onward.
Those are great points! I didn’t know about all of those, so it was enlightening for me. I think, as Brecht said above, that it’s due to limited available time that CPUs haven’t gotten more love yet. But it sounds like in the future there’ll be more love to spread around on those heatsinks.
Cycles X was a speedup for everyone and a solid improvement to most aspects of Cycles. I run multiple CPU-only render farms with hundreds of thousands of cores, and I’m happy with the updates we have been getting. RTX cards are without a doubt the best hardware for the job for an artist, considering cost and raw horsepower, so it makes a lot of sense that the devs would spend a lot of time making them work well. I guarantee 95% of Blender users have a GPU that is significantly higher tier than their CPU.
A massive improvement no one really notices is the overhead between Blender and Cycles; with 3.0+ that overhead is so tiny I can do 20 FPS renders out of Cycles from the CLI on my 32-thread CPU, and the overhead of denoising is also much reduced.
Rendering images out of Cycles for my Looking Glass at 60 FPS on GPU is insane (48 angles per frame × 250 frames = 12,000 renders).
It was a speedup only for high-end CPUs and GTX/RTX GPUs. On my lower-end Intel i5, I am seeing almost a 20-30% speed reduction (what used to render in 6.5 minutes now takes 11 minutes with the same settings).
I assume you are using adaptive sampling, which has different threshold values now, so old .blend files need their threshold adjusted; for example, a scene with a 0.01 threshold in 2.93 should be closer to 0.1 in Blender 3.0+.
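As a rough rule of thumb (an assumption extrapolated from the single 0.01 → 0.1 example above, not an official conversion), the 2.93 noise threshold can be scaled by about 10× when migrating a scene to 3.0+; the helper name below is hypothetical:

```python
def convert_adaptive_threshold(old_threshold: float) -> float:
    """Rough rule of thumb, not an official mapping: a 2.93 adaptive
    sampling noise threshold maps to roughly 10x its value in 3.0+,
    e.g. 0.01 in 2.93 becomes about 0.1 in Blender 3.0."""
    return old_threshold * 10.0

# A scene tuned to a 0.01 threshold in 2.93:
print(convert_adaptive_threshold(0.01))
```

Either way, the converted value is only a starting point; the threshold still needs visual tuning per scene.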
I have tried a lot of different settings, but there’s absolutely no way I could reduce my render times in 3.0. In 2.93 I used branched path tracing (no adaptive sampling) with a minimum of 30 samples and a maximum of 300 (for transmission), but with the noise threshold in 3.0 and the same 30 minimum and 300 maximum samples, it’s taking a lot more time to render. Even reducing my max samples to 250 and min to 0 takes 2 minutes longer to render.
I would ask the Cycles developers to kindly work on CPU optimisation as soon as possible. I’ve heard that branched path tracing won’t be coming back. That’s OK, but I need a better solution for my CPU.
You could always share your .blend so people can see what’s up; until you share comparisons and benchmarks (proof), your words are quite meaningless.
And of course you can’t compare branched path tracing to plain path tracing. The devs didn’t figure out a replacement for branched before they yeeted it out of the code base, so there will be plenty of scenes where branched could beat Cycles X.
It appears that AMD GPU support for GNU/Linux is in a worse situation compared to other platforms.
Apparently HIP for GNU/Linux will initially support fewer, more expensive GPUs (Navi 21, 22?), it will be officially limited to a few distros, and it will arrive later (in Blender 3.2?).
Not the best situation for those who like to support free and open source solutions.
I hope that with the new Intel GPUs we will see first-class support for GNU/Linux.
Personally I hope for a fully free and open source, “distro agnostic” solution for GPU rendering in Blender.
Eventually I will focus more on CPU rendering for my next platform, so I hope to see CPU optimizations, because I do not like being “forced” to buy a GPU that works only with proprietary drivers.
Hi. Is there any news, or are there any plans, about AMD card rendering on Windows systems? I’m thinking of something like what OptiX is for NVIDIA. I mean, are there plans for future development of rendering support for AMD users?
If by OptiX you mean the use of hardware ray tracing, then I believe the latest is that AMD and Blender have not publicly contributed to the conversation in any meaningful way. I would infer this means that adding hardware ray tracing for AMD GPUs is the lowest priority. Also worth noting that since the release of RDNA2 in 2020, AMD GPU owners have been waiting for hardware ray tracing; however, AMD and Blender have failed to deliver.
In summary, no news I’m aware of yet, and if/when there is news, be cautious of any claim that is not implemented and independently tested.
@L_S the link you posted about hardware ray tracing is related to OpenCL, which was slow, unstable and problematic.
AMD GPUs already have support with HIP. I’m sure that for AMD, adding RT hardware support is a priority and will be part of the HIP implementation. Regarding GPU denoising, I think there will be some improvements with AMD’s filter implementation, which I think will come sooner rather than later.
That question will never be answered there; that’s a forum for development, and those kinds of questions tend to be ignored or moderated there.
The question can be asked here, but as you can see there, the task exists; so far they are solving HIP support problems on Linux, so until those are solved it’s logical that hardware ray tracing isn’t there yet. It will come later, once the initial implementation is working properly.
Apart from that, the person who can shed a bit of light on that is @bsavery, but I think he was off duty for some time due to personal matters.