Setting SM_* vs compute_* for CYCLES_CUDA_BINARIES_ARCH

When running CMake to configure a Blender build, there is a setting called CYCLES_CUDA_BINARIES_ARCH that lets you build kernels only for your own card to speed up build time.

As an example, I have an RTX 3090, which uses CUDA compute capability 8.6 according to Nvidia. I currently have my CMake setting as:

CYCLES_CUDA_BINARIES_ARCH=sm_86;compute_86

The default was CYCLES_CUDA_BINARIES_ARCH=sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 sm_86 compute_75
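
For reference, I pass my override on the cmake command line roughly like so (just a sketch; the source path is a placeholder, and I'm assuming WITH_CYCLES_CUDA_BINARIES is the option that turns on prebuilding the kernels):

cmake -DWITH_CYCLES_CUDA_BINARIES=ON -DCYCLES_CUDA_BINARIES_ARCH="sm_86;compute_86" /path/to/blender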

I did this because I saw that the 75 architecture had both an sm_ entry and a compute_ entry. I can’t find any documentation on what this should be for the 3000 series cards, though. Am I correct to leave both sm_86 and compute_86 in there, or is there a downside to having both?

The “compute_75” “architecture” was added in this commit: rBa9644c812fc1

You can read the commit description for more technical details.

Basically, the “compute” binary is an extra binary that the GPU driver can reuse to let unreleased GPUs run CUDA rendering in Blender Cycles without Cycles having to be recompiled with an updated version of CUDA.
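
To illustrate the difference (this is general CUDA behaviour rather than anything Blender-specific, and kernel.cu below is just a placeholder file name): an sm_XX target produces native machine code for that exact architecture, while a compute_XX target embeds PTX, an intermediate representation the driver can JIT-compile for GPUs that did not exist when the binary was built. In nvcc terms the two roughly correspond to:

nvcc -gencode arch=compute_86,code=sm_86 kernel.cu (native cubin, runs only on 8.6 hardware)
nvcc -gencode arch=compute_86,code=compute_86 kernel.cu (PTX, which the driver can JIT for newer GPUs)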

You can safely remove “compute_75” and “compute_86” from “CYCLES_CUDA_BINARIES_ARCH” to save disk space and compilation time. However, if Nvidia releases a new GPU architecture and you buy one of these GPUs near release, you might need to re-enable “compute_75” or “compute_86” while you wait for Blender to take advantage of a new version of CUDA that supports the new architecture. Or you can experiment: install the new version of CUDA and see if Blender will compile with it for the new GPUs without any code changes.


Thanks for the reply. I had seen that commit, but what confused me is that compute_75 is still in the default config, but compute_86 was not added, which does not seem consistent.

But re-reading the commit message, I guess it makes sense, as sm_86 covers the latest and greatest and the compute_75 is considered for “downlevel” cards. Thanks, and I’ll keep my local build to just “sm_86” from now on.

Hi, you don’t have to build the CUDA kernels every time you compile Blender; the kernel is built on the fly when you start a render. If nothing changed, there is no need to rebuild the kernel.

option(WITH_CUDA_DYNLOAD "Dynamically load CUDA libraries at runtime" ON)
mark_as_advanced(WITH_CUDA_DYNLOAD)

This option is already enabled, so the kernel can be built dynamically.
Maybe give it a try.
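
For example, a local build that skips the prebuilt kernels entirely could be configured like this (just a sketch; as far as I know WITH_CYCLES_CUDA_BINARIES is the option that controls precompiling, and the source path is a placeholder):

cmake -DWITH_CYCLES_CUDA_BINARIES=OFF /path/to/blender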

Cheers, mib

Just a heads up here:

While on-demand compilation is supported on Windows, the environmental conditions it needs to work (having a compatible CUDA toolkit on the PATH and running from a compatible VS developer prompt) are essentially never right. I’m not saying it doesn’t work, it’s just very unlikely that it will.
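
If you want to check whether your environment happens to meet those conditions, you can run these from the same VS developer prompt (generic checks, nothing Blender-specific):

where cl
where nvcc
nvcc --version

If cl and nvcc both resolve and the reported CUDA version is one your Blender build supports, on-demand compilation at least has a chance of working.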
