Blender 2.9 CUDA on Mac

Hello all:

Earlier this year I moved to 2.9 (6/8/20 build) as it was pretty stable (mostly) for what I need it for. This build had CUDA support and I have been using it with my MacPro w/ 2 X GTX Titans and it is pretty fast - see full specs below. I remember reading somewhere that CUDA support was going to stop with 2.9 - sigh.

I recently downloaded a newer build of 2.9, and yes CUDA is gone. That instantly makes this system useless in Blender. So question: is the lack of CUDA gonna be a permanent thing with Blender/Cycles on Mac? I found somewhere in the fine print that it was caused by not having CUDA libraries that are compatible with new Blender code - does anybody know if this will change?

I am not a big computer tech-head, but I think this system is topped out GPU-wise. Still, this MacPro runs as quickly as one of my Win10 systems that has an RTX 2070.

Specs: MacPro 2010, 2 X 3.3 XEON 6core/12 thread, 2 X 6gb GTX Titan, 64gb, High Sierra, etc

I know I can always get a new computer and/or switch to WIN10, but there are other reasons I wouldn’t want to.

Thanks in advance,



AFAIK Apple stopped supporting CUDA inside macOS several years ago, and Nvidia deprecated CUDA support on macOS with CUDA 10.x; with CUDA 11 (which Blender will probably adopt around 2.92-2.93), macOS support will be removed completely. As far as I know, the only hope x86-64 macOS users have today for hardware acceleration is the porting of Blender (the Cycles render engine and a lot of other UI code) from OpenGL to Vulkan. Vulkan can already use Apple's Metal graphics layer through one additional intermediate software layer: MoltenVK.

One consideration: by the time Blender is ready with Vulkan/MoltenVK/Metal, Apple may no longer be selling x86-64 hardware at all, and Blender, like other software houses, will be speeding up the port to ARM hardware, so x86-64 support will likely be deprioritized a bit. Blender has a limited number of developers who are always full of work, and limited resources should be spent on well-defined priorities. I hope Blender has a well-defined plan to add new features (AR/VR support, the Asset Manager, new node-based physics and simulation, interactive mode, etc.) while maintaining support for the roughly 6-10% of Blender users on macOS (I am a Linux user, part of the under-3%).

As someone not affiliated with Blender and not a developer, I expect to be corrected by some better-informed Blender developers.


Just adding my voice to the hope that the 2.90 series might retain CUDA support for Mac. Although Apple doesn't support it, my system certainly does, as must those of many users, and I rely on it every day in production in my slowly growing animation business. I deliberately built a system that made CUDA possible on my Mac Pro 2012, and deliberately kept it on High Sierra so that I could continue using it this way. I appreciate the difficulties of managing Mac compatibility, but one of the things I've always loved about Blender was its flexibility and not being forced into major hardware changes, which are cost-prohibitive for me most of the time. I would be so grateful if CUDA compatibility could be kept at least for the 2.9 series, or until a working alternative (Vulkan? Metal?) were in place. I was even optimistic about OptiX support (my Titan X card is Maxwell architecture) until this support drop was announced. Please help!


Sometimes it can be funny to read this. I don't want to be disrespectful, but Apple has something like 1,000 developers for every Blender developer, and I'd guess Nvidia has 200-500 developers for every Blender developer. How can anyone think that one crowdfunded open-source project (even one of the biggest) can solve problems for Apple and Nvidia, both of which have multi-billion-dollar incomes?
Nvidia announced that CUDA would drop macOS support with version 11 of the SDK (this was announced several years ago), and even before Nvidia's announcement, macOS had already dropped support for libraries Blender needs in order to work 100% as it does on other OSes. I think these rants are made in the wrong place on Blender-related media (here, a place for developers and development topics). Apple users should take these complaints to Apple's and Nvidia's (and AMD's) blogs/forums/social media, with a bigger number of petitioners, or change hardware platform and OS in protest.

I am still a Linux user, still not affiliated with Blender, still not a developer, and still waiting for a more informed reply from some Blender developer.



Ditto that. I have three 2010 Mac Pro towers, all loaded with dual GTX cards and 6-core 3.46 Xeons, specifically for Blender, and like the poster above I've parked them at High Sierra just for render purposes. They out-render my Windows box with an RTX in it by about 30%, at less than half the cost fully equipped. I sure as hell am not thrilled to have to replace 3 perfectly good towers just because someone decided for me not to keep CUDA enabled in 2.9. What was it hurting?


@LewnWorx You don't need to replace expensive, already-paid-for hardware; you just need not to use macOS on that hardware, end of story. You funded a brand that doesn't want you to use free, open-source programs, and that doesn't depend on Blender. Linux and Windows will make your valuable hardware work 100%, if Apple allows you to install/use these alternative operating systems.


AFAIK CUDA support wasn't dropped 'just because'. Blender uses a newer CUDA version because it needs it for some new features, but that newer CUDA version is not supported by Apple (because they want to push their own Metal) or by Nvidia (because they got sick of working around Apple, I guess).

It's a pity, but it is mainly Apple that is to blame. You can't expect Blender to stop innovating and avoid newer CUDA versions just because Apple decides they want to be difficult.

Let's hope it gets sorted out once everything is ported to Vulkan. Apple doesn't really cooperate with Vulkan either, but at least a translation layer (MoltenVK) is possible there.


Hi @LewnWorx !
I have a similar setup, and since Apple decided to stop allowing Nvidia GPUs to work on their computers, I am working with Linux.
I have an Ubuntu partition on my Mac Pro 2010 and it works great. In fact, Blender is noticeably faster on Linux …


That was the only thing that kept me on High Sierra. I hope there will be a Metal workaround soon. I just can't imagine my workflow on Windows anymore.


That doesn’t sound correct. The CUDA driver did not need to be maintained by Apple, it’s not developed by them at all, and it was working fine on Mojave from what I heard. There was just a political decision not to allow it.

I have the same error on Windows. It's caused by the latest Nvidia driver; I had to go back to an older driver and the problem was solved.

What I'm saying is that this is false; I have no idea what you're basing that statement on.


Rejoice: getting GPU support for 2.9 is a matter of fixing one line in the CUDA device invocation of Cycles, plus a few lines in the NanoVDB.h include file used at runtime, and then building. I'm on an aging Hackintosh running High Sierra (10.13.6), with Xcode 10.1, WebDrivers 387., a GTX 1050 Ti, and I'm finally back with GPU-accelerated Cycles rendering in the live view using 2.92 (commit fa82a15676eb37a7e73d1e3a0e8095684842376d)!!!

I'll put the detailed instructions on a site I can link to from here, as I'm getting a "new user can't post more than 2 links" error when trying to post (GPU: success, forum post: fail…).

Update: the detailed instructions are at


Oooh this is really exciting! I’m gonna have to give this a shot soon. Thanks for posting the guide!

Good luck and let me know how it goes. Worst comes to worst, I can always put out the binary, but I have no idea how many dynamic libs it's dependent on.

Oh boy, this is proving to be more challenging than I anticipated :grimacing: This is my first time ever trying to make my own Blender build, and my lack of coding & command-line knowledge isn't helping either, haha.

Seems like I’m having issues around every corner - I couldn’t get svn or cmake installed via homebrew on High Sierra (just getting “no bottle available” errors), so I switched to my laptop with Mojave where I got those installed properly. But now I’m running into issues related to versions of SDK & command line tools… at least I think that’s what the error messages are saying? I get a whole series of errors similar to this, just with different numbers each time:

In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/Carbon.framework/Headers/Carbon.h:67:
/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/Carbon.framework/Frameworks/Ink.framework/Headers/Ink.h:200:51: error: expected ‘;’ after top level declarator
typedef struct OpaqueInkTextRef* InkTextRef DEPRECATED_IN_MAC_OS_X_VERSION_10_14_AND_LATER;

I searched for solutions to this error; people said to try reinstalling the command line tools. I tried that once or twice and it didn't make a difference, unfortunately. At the end it generates the file, but it's 1KB in size and obviously incomplete. I'm gonna keep trying different things to get it working; it would be really satisfying to learn how to do this. If you have any ideas or suggestions for me to try, I'd really appreciate the help.
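Chiming in in case it helps: those Carbon/Ink.h parse errors come from the 10.14 SDK headers that the Mojave command-line tools install. A hedged workaround, assuming a 10.13 SDK is still present somewhere on disk (the path below is only a guess for a typical install), is to point CMake at the older SDK explicitly:

```shell
# Hypothetical paths; adjust to wherever a 10.13 SDK actually lives.
cmake ../blender \
  -DCMAKE_OSX_SYSROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX10.13.sdk \
  -DCMAKE_OSX_DEPLOYMENT_TARGET=10.13
```

CMAKE_OSX_SYSROOT and CMAKE_OSX_DEPLOYMENT_TARGET are standard CMake variables; whether this actually sidesteps the 10.14 deprecation macros depends on that SDK being installed on your machine.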

I don't remember how I got svn / cmake onto my High Sierra machine; I did get a "no bottle available" error recently with homebrew, so I'm afraid its support for High Sierra is gone 8-( And compiling on Mojave and using the result on High Sierra is most likely to fail.
So I'd recommend building cmake & svn from source (you should have everything required within Xcode) to move forward, or I can upload my binary somewhere for you to pick up.
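For what it's worth, CMake is straightforward to build from source since its bootstrap script ships in the release tarball; the version below is just an example, not a requirement (svn is fiddlier because of its APR dependencies):

```shell
# Example release; any reasonably recent CMake should do.
curl -LO https://cmake.org/files/v3.18/cmake-3.18.4.tar.gz
tar xzf cmake-3.18.4.tar.gz
cd cmake-3.18.4
./bootstrap && make && sudo make install
```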

I’m gonna try that now. Didn’t want to do that before since there was a warning about the possibility of build failures. I’ll report back soon & let you know if I get anywhere!

I finally got a working Blender build! Thanks for the advice about building those from source.

Once I got the default build working, I followed your guide, applied the fixes and recompiled. I’m able to select GPU Compute as the render device, but when I try to render, the program crashes. There’s also no Cycles Render Devices menu under the System tab in preferences, do you have that in your build?

Maybe I’m having issues because I have a multi GPU setup - a 1060 and a 980 Ti, which have different CUDA compute levels. Although I do have a secondary High Sierra machine with a single GTX 680, I could try recompiling for its CUDA compute level and see if that works.
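For what it's worth on the mixed-GPU theory: Cycles ships one compiled kernel per CUDA compute capability (the 1060 is sm_61, the 980 Ti is sm_52, the GTX 680 is sm_30), and the build's CYCLES_CUDA_BINARIES_ARCH CMake option controls which ones get built. A tiny sketch (hypothetical helper, not Blender code) of working out which kernel targets a build has to include:

```python
def required_cuda_binaries(device_archs):
    """Given the (major, minor) compute capabilities of the installed
    GPUs, return the sorted set of sm_XX kernel targets the build must
    ship. Hypothetical helper for illustration; in a real build this
    list goes into the CYCLES_CUDA_BINARIES_ARCH CMake option."""
    return sorted({f"sm_{major}{minor}" for major, minor in device_archs})

# GTX 1060 (6.1) + GTX 980 Ti (5.2): both kernels are needed.
print(required_cuda_binaries([(6, 1), (5, 2)]))  # ['sm_52', 'sm_61']
```

If the build only compiled a kernel for one of the two architectures, the crash on render would at least be consistent with that, so trying the single-GPU GTX 680 machine sounds like a reasonable next step.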
