Earlier this year I moved to 2.9 (the 6/8/20 build) as it was (mostly) stable enough for what I need it for. That build had CUDA support, and it has been pretty fast on my Mac Pro with 2× GTX Titans - see full specs below. I remember reading somewhere that CUDA support was going to stop with 2.9 - sigh.
I recently downloaded a newer build of 2.9, and yes, CUDA is gone. That instantly makes this system useless in Blender. So the question: is the lack of CUDA going to be a permanent thing with Blender/Cycles on Mac? I found somewhere in the fine print that it was caused by there being no CUDA libraries compatible with the new Blender code - does anybody know if this will change?
I am not a big computer tech-head, but I think this system is topped out GPU-wise. Still, this Mac Pro runs as quickly as one of my Win10 systems that has an RTX 2070.
Specs: Mac Pro 2010, 2× 3.3 GHz Xeon (6-core/12-thread each), 2× 6 GB GTX Titan, 64 GB RAM, High Sierra, etc.
I know I can always get a new computer and/or switch to Win10, but there are other reasons I wouldn't want to.
AFAIK Apple stopped supporting CUDA inside macOS several years ago, and Nvidia deprecated macOS support with CUDA 10.x; with CUDA 11 (which Blender will probably adopt around 2.92–2.93), macOS support will be removed completely. As far as I know, the only hope x86-64 macOS users have nowadays for hardware acceleration is a port of Blender (the Cycles render engine and a lot of other UI code) from OpenGL to Vulkan. Vulkan can already use macOS's Metal graphics layer through one intermediate software layer: MoltenVK.

One consideration I can make: by the time Blender is ready with Vulkan/MoltenVK/Metal, Apple will no longer be selling x86-64 hardware. Blender, like other software houses, will be speeding up its conversion for ARM hardware, so x86-64 support will get a bit less attention. Blender has a limited number of developers who are always full of work, and limited resources should be spent on well-defined priorities. I hope Blender has a well-defined plan to add new features (AR/VR support, Asset Manager, new node-based Physics and Simulations, Interactive Mode, etc.) and maintain support for the roughly 6–10% of all Blender users on macOS (I am a Linux user, part of the under-3%).
As I'm neither affiliated with Blender nor a developer, I expect to be corrected by some more informed Blender developers.
Just adding my voice to the hope that the 2.90 series might retain CUDA support for Mac. Although Apple doesn't support it, my system certainly does, as must those of many users, and I rely on using it every day in production in my slowly growing animation business. I deliberately built a system that made CUDA possible on my Mac Pro 2012, and deliberately stayed on High Sierra so that I could continue using it this way. I appreciate the difficulties of managing Mac compatibility, but one of the things I've always loved about Blender was its flexibility and not being forced into major hardware changes, which are cost-prohibitive for me most of the time. I would be so grateful if CUDA compatibility could be kept at least for the 2.9 series, or until a working alternative (Vulkan? Metal?) is in place. I was even getting optimistic about OptiX support (my Titan X card is Maxwell architecture) until this support drop was announced. Please help!
Sometimes it can be funny to read this thread. I don't want to be disrespectful, but Apple has maybe 1,000 developers for each Blender developer, and I'd suppose Nvidia has 200–500 developers for each Blender developer. How can anyone think that one crowdfunded open-source project (even one of the biggest) can solve problems for Apple and Nvidia, both of which have multi-billion incomes?
Nvidia announced that CUDA would drop Apple macOS support with version 11 of the SDK (this was announced quite a while ago), and even before Nvidia's announcement, macOS had dropped support for libraries Blender needs to work 100% as it does on other OSes. I think these rants on Blender-related media (this place is for developers and development topics) are made in the wrong place. Apple users should make these complaints on Apple's, Nvidia's, and AMD's blogs/forums/social media/…, with a bigger number of petitioners, or change hardware platform and OS in protest.
I'm still a Linux user, still not affiliated with Blender, still not a developer, and still waiting for a more informed reply from some Blender developer.
Ditto that. I have three 2010 Mac Pro towers, all loaded with dual GTX cards and 6-core 3.46 GHz Xeons, specifically for Blender, and like the poster above I've parked them at High Sierra just for render purposes. They out-render my Windows box with an RTX in it by about 30%, at less than half the cost fully equipped. I sure as hell am not thrilled to have to replace three perfectly good towers just because someone decided for me not to keep CUDA enabled in 2.9. What was it hurting?
@LewnWorx You don't need to replace expensive, already-paid-for hardware; you need "not" to use macOS on that hardware, end of story. You funded a brand that "doesn't" want you to use free open-source programs, and that "doesn't" depend on Blender. Linux and Windows will make your valuable hardware work 100%, if Apple allows you to install/use these alternative operating systems.
AFAIK CUDA support wasn't dropped "just because". Blender uses a newer CUDA version because it needs it for some new features. But that newer CUDA version is not supported by Apple (because they want to push their own Metal) or by Nvidia (because they got sick of working around Apple, I guess).
It's a pity, but it is mainly Apple that is to blame. You can't expect Blender not to innovate and use newer CUDA versions just because Apple decides they want to be difficult.
Let's hope it gets sorted out once everything is ported to Vulkan. Even with Vulkan Apple doesn't really cooperate, but at least a translation layer is possible.
Hi @LewnWorx!
I have a similar setup and, since Apple decided to stop allowing Nvidia GPUs to work on their computers, I am working with Linux.
I have an Ubuntu partition on my Mac Pro 2010 and it works great. In fact, Blender is noticeably faster on Linux…
That was the only thing that kept me on High Sierra. I hope there will be a Metal workaround soon. I just can't imagine my workflow on Windows anymore.
That doesn't sound correct. The CUDA driver did not need to be maintained by Apple; it's not developed by them at all, and it was working fine on Mojave from what I heard. There was just a political decision not to allow it.
Rejoice: getting GPU support for 2.9 is a matter of fixing one line in the CUDA device invocation of Cycles, plus a few lines in the NanoVDB.h include file used at runtime, and then building. I'm on an aging hackintosh running High Sierra (10.13.6), with Xcode 10.1, Nvidia Web Drivers 387.10.10.10.40.132, and a GTX 1050 Ti, and I'm finally back with GPU-accelerated Cycles rendering in the live view using 2.92 (commit fa82a15676eb37a7e73d1e3a0e8095684842376d)!!!
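Until the full write-up is posted: on top of those small source edits, you need a configure line that turns the CUDA device back on. WITH_CYCLES_DEVICE_CUDA and WITH_CYCLES_CUDA_BINARIES are Blender's actual CMake options, but I'm sketching this from memory, so double-check it against your own checkout:

# Configure with the Cycles CUDA device and precompiled CUDA kernels enabled.
# The one-line device-invocation fix and the NanoVDB.h edit still have to be
# applied to the source first (details in the write-up).
cmake ../blender \
  -DWITH_CYCLES_DEVICE_CUDA=ON \
  -DWITH_CYCLES_CUDA_BINARIES=ON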
I'll put the detailed instructions on a site I can link to from here, as I'm getting a "new user can't post more than 2 links" error trying to post (GPU: success, forum post: fail…).
Good luck and let me know how it goes. Worst comes to worst, I can always put out the binary, but I have no idea how many dynamic libs it depends on.
Oh boy, this is proving to be more challenging than I anticipated. This is my first time ever trying to make my own Blender build, and my lack of coding & command-line knowledge isn't helping either, haha.
Seems like I'm having issues around every corner. I couldn't get svn or cmake installed via Homebrew on High Sierra (just getting "no bottle available" errors), so I switched to my laptop with Mojave, where I got those installed properly. But now I'm running into issues related to the versions of the SDK & command line tools… at least I think that's what the error messages are saying? I get a whole series of errors similar to this, just with different numbers each time:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/Carbon.framework/Headers/Carbon.h:67:
/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/Carbon.framework/Frameworks/Ink.framework/Headers/Ink.h:200:51: error: expected ';' after top level declarator
typedef struct OpaqueInkTextRef* InkTextRef DEPRECATED_IN_MAC_OS_X_VERSION_10_14_AND_LATER;
I searched for solutions to this error; people said to try reinstalling the command line tools. Tried that once or twice and it didn't make a difference, unfortunately. At the end, the build generates the Blender.app file, but it's 1 KB in size & obviously incomplete. I'm gonna keep trying different things to get it working; it would be really satisfying to learn how to do this. If you have any ideas or suggestions for me to try, I'd really appreciate the help.
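One thing I'm going to try next, based on other threads about these availability-macro errors, is pinning the build to a single SDK and matching deployment target so the DEPRECATED_IN_... macros expand consistently. CMAKE_OSX_SYSROOT and CMAKE_OSX_DEPLOYMENT_TARGET are standard CMake variables; the path below is just the SDK from my error messages, so adjust for your setup:

# Point the build at one specific SDK + matching deployment target.
cmake ../blender \
  -DCMAKE_OSX_SYSROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk \
  -DCMAKE_OSX_DEPLOYMENT_TARGET=10.14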
I don't remember how I got svn/cmake on my High Sierra machine; I did have a "no bottle available" error recently with Homebrew, so I'm afraid Homebrew support for High Sierra is gone 8-( And compiling on Mojave and using on High Sierra is most likely to fail.
So to move forward I'd recommend building cmake & svn from source (you should have everything required within Xcode), or I can upload my binary somewhere for you to pick up.
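For cmake, the from-source route is simple because it ships a self-contained bootstrap script; the version below is only an example (pick whichever release still builds on your toolchain). svn is fussier due to its apr/serf dependencies, so follow its own INSTALL notes:

# Build and install CMake from source with its bundled bootstrap script.
curl -LO https://github.com/Kitware/CMake/releases/download/v3.19.8/cmake-3.19.8.tar.gz
tar xzf cmake-3.19.8.tar.gz
cd cmake-3.19.8
./bootstrap         # generates Makefiles using only a working C++ compiler
make                # compile
sudo make install   # installs under /usr/local by default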
I'm gonna try that now. Didn't want to do that before since there was a warning about the possibility of build failures. I'll report back soon & let you know if I get anywhere!
I finally got a working Blender build! Thanks for the advice about building those from source.
Once I got the default build working, I followed your guide, applied the fixes, and recompiled. I'm able to select GPU Compute as the render device, but when I try to render, the program crashes. There's also no Cycles Render Devices menu under the System tab in Preferences; do you have that in your build?
Maybe I'm having issues because I have a multi-GPU setup - a 1060 and a 980 Ti, which have different CUDA compute levels. Although I do have a secondary High Sierra machine with a single GTX 680, I could try recompiling for its CUDA compute level and see if that works.
You definitely need to set the right compute level when building Blender/Cycles; you can do it from the command line (as per my instructions) or set it in the CMake files (the default is to build for all known compute levels).
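Roughly like this; CYCLES_CUDA_BINARIES_ARCH is the Blender CMake variable for the kernel architectures, and the sm_* values below are the standard NVIDIA compute capabilities for those cards, but verify them against NVIDIA's tables for your exact models:

# Build CUDA kernels only for the cards you own:
#   GTX 1060 -> sm_61, GTX 980 Ti -> sm_52, GTX 680 -> sm_30
cmake ../blender -DCYCLES_CUDA_BINARIES_ARCH="sm_52;sm_61"

For the single GTX 680 machine, you'd pass sm_30 instead.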
I don't have anything listed in the Render Engine: Device/Device pop-up menu. The Add-on Preferences/Cycles setup is working, though: my GPU + my CPU show up in the CUDA tab, and I can enable/disable them. I don't know if that's a 2.9 feature or a problem with the card detection from within the UI.
I haven't tested Cycles rendering much; for me the Viewport Shading mode was the most important thing. I do rendering on headless Linux machines (either here or on p2 instances on AWS). With the OpenImageDenoise option enabled in Denoising/Viewport, the Viewport Shading mode is almost at Eevee-level performance. [flash forward] I just tested a few scenes and one is definitely crashing on "Render Image" with Cycles with both GPU and CPU enabled (but fine otherwise). I don't have any info on whether that's a 2.92 bug or a CUDA issue. Viewport Shading works for all GPU/CPU combinations in all scenes.