Blender 2.83.6 GPU - CUDA - Out of memory: 1050 Ti OK / dual 1070 KO


This is my first post; I hope it’s OK to post here :slight_smile:

I’ve got a scene in Cycles (GPU) that runs out of memory on a machine with a 1070 and a 1070 Ti (8 GB each), yet it renders without any trouble on a 1050 Ti with only 4 GB…
The same render also works on a 1060 with 6 GB of VRAM.

My main machine has a 2070 Super and a 2060 Super; it works, but very, very slowly.

1050 Ti: 14-minute render
1060: 5 minutes
1080 Ti: 3 min 15 s
1070/1070 Ti: CUDA out of memory
2070S/2060S: very, very slow, then crash

I think this has something to do with the dual-card setup, but it’s beyond my understanding.
Do I need to configure something to get this scene rendering on 2 GPUs?

Cycles in GPU mode; machines are set up with CUDA GPUs (CPU box unchecked)
All machines run Windows 10
NVIDIA drivers 452.06

Same problem with two GTX 970s: CUDA error: out of memory.
And no problem with a single GTX 970…
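In case it helps rule out a configuration issue on the dual-GPU machines: the enabled Cycles devices can be inspected and set from Blender’s Python console. A minimal sketch against the 2.8x preferences API (this only runs inside Blender, since it needs the bundled `bpy` module):

```python
# Run from Blender's Python console or Text Editor (bpy is Blender-only).
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()  # refresh the detected device list

# Enable every CUDA device and print what Cycles actually sees.
for dev in prefs.devices:
    if dev.type == 'CUDA':
        dev.use = True
    print(dev.name, dev.type, dev.use)

# Make the current scene render on the GPU.
bpy.context.scene.cycles.device = 'GPU'
```

If both cards show up here with `use = True` and the scene still fails with two GPUs while a single smaller card succeeds, that points at Blender/driver behavior rather than the scene setup.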

Have you tested 2.90 or 2.91?


Yes and no.

With 2.90 and 2.91 alpha:
With the more recent GPUs (1070/1070 Ti and 2070S/2060S) it works, probably thanks to their memory size.
But with two GTX 970s, it’s the same problem: CUDA error: out of memory.

So something was improved between 2.83 and 2.90; it would be nice to have this “fix” backported to the LTS version.