Cycles 8xGPU slower than 4xGPU

Hi everyone,

I have the following problem: whenever I render a scene with Cycles on the GPU, I get a much slower render time on my newly built render slave than on my own workstation.

Workstation specs:
i7-7700K
16 GB RAM
4x GTX 1080 Ti
Render time: 40 min for 25 frames

Slave specs:
Xeon E5-2630 v4
64 GB RAM
8x GTX 1080 Ti
Render time: 6 h 40 min for the same 25 frames

I am using the same configuration on both systems; I just do not understand why this happens. Any ideas would help and are very welcome.

Cheers,
Tino

Hi, many things could be going on here.
What software do you use to render over LAN?
Can your slave's power supply deliver more than 2000 watts?
Are you testing the same 25 frames, or do the slave's frames include volumetrics, for example?
Check the BF Blender benchmark on each system.

Just guessing here, cheers, mib

I am using Deadline, it is free for 2 render nodes :wink:
As I said, everything in the scene is the same. The render settings are the same too.
The slave has a 2000 watt redundant power supply, which means the available power should be well above 2000 watts.

I'll give the benchmark a try, thanks for the tip.

The Blender benchmark does not work. For some reason it says my OpenGL is under 3.3, which makes no sense, since the NVIDIA driver should provide this, and the last update I installed was 411.63.

or am I missing something?

Cheers

I don’t think the benchmark will help debug this.

Have you tried:

  • Manually logging into the machine, opening Blender, configuring all GPUs to be used, and rendering.
  • Verifying that the GPUs are actually doing work, using e.g. https://www.techpowerup.com/gpuz/ (or the command line check sketched below).
  • Testing whether a single GPU works, or if it's an issue specifically with multiple GPUs.
  • Checking whether Deadline is configured correctly to use these GPUs.
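
If running a GUI tool on the slave is awkward, the NVIDIA driver also ships a command line utility. As a minimal sketch (assuming nvidia-smi is on the PATH, which a standard driver install should give you), this prints per-GPU utilization every 5 seconds:

nvidia-smi --query-gpu=index,name,utilization.gpu --format=csv -l 5

If the GPUs sit at 0% while a frame is rendering, Cycles isn't using them.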

I am trying to open Blender, but for some weird reason my OpenGL is not up to date. I say weird, because I am using the NVIDIA driver 411.63 on my slave too, and it comes with OpenGL 4.6.

Are these NVIDIA GPUs actually used to drive the display, though? Maybe it's an integrated Intel card or, if you are connecting remotely, some kind of virtual graphics driver.

I actually don't have a display connected to the slave. I only use it for renders.

And I think all GPUs are working. I just ran OctaneBench to see whether all graphics cards on the slave are running, and I got a very good result, so I think all GPUs are running. Does Cycles have a limit on GPUs? For example, Redshift supports only 8 and Octane supports a maximum of 10.

I have also checked the logs, and on the slave (which should be faster) the object synchronization, the BVH update and all of that also took longer.

There is no limit on the number of GPUs in Cycles. But note that Cycles will not use GPUs automatically; you have to set them in the user preferences, or the render farm needs to enable them with a script.

The synchronization stage is single-threaded, and the i7 has a higher clock speed, so that stage would be expected to be slower on the Xeon. BVH build is multithreaded, though. Some things like loading images depend a lot on the hard drive or network drive.

Please also check that it isn't a Deadline problem. They seem to have some OS- and environment-specific limitations, and I have a feeling they upload the scene as it is in memory when rendering, which means that for large scenes (detailed adaptive subdivision, for example) on slow networks you might see very slow startup times. I'm not sure if this is your issue, but if you don't find anything wrong with the systems or Blender, you might want to check there.

Hi guys,

Thanks for the help.
@brecht can you share a link where I can have a look at that script?

@smilebags I don't think it is a Deadline issue. I was using the same machine as my workstation as a slave before. The new toy is the one causing issues in today's tests; with the old slave everything was OK.

I don't know how Deadline works, but for example a script like this could be run in Blender at the start of every render to ensure CUDA is enabled.

import bpy

# Set CUDA as the compute device type in the user preferences.
prefs = bpy.context.user_preferences
cprefs = prefs.addons['cycles'].preferences
cprefs.compute_device_type = 'CUDA'

# Make the scene render on the GPU instead of the CPU.
scene = bpy.context.scene
scene.cycles.device = 'GPU'

Alternatively you could copy the Blender configuration from one computer to the other, which is in:

C:\Users\Me\AppData\Roaming\Blender Foundation
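(Assuming Blender 2.79, which matches the user_preferences API above, the file that stores these settings is userpref.blend under Blender\2.79\config in that folder.)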

Is it a Python script? Will that also enable all GPUs? I can run Python scripts via Deadline before I send the render job, that won't be an issue.

I'll try this later; I have to finish a couple of projects first. Will let you know if it works.

It's a Python script that you would need to run as part of the Blender command that does the render, for example: blender -b -P enable_cuda.py test.blend -f 1.

Hi, I investigated further, and according to GPU Shark none of the GPUs show any usage percentage. If I enable CUDA via that Python script, is it possible that all CUDA devices also get checked?

All the CUDA devices are already enabled by default once you enable CUDA.

You could enable them manually, but it shouldn’t be needed.

# get_devices() returns the CUDA and OpenCL device lists; the CUDA list
# can also contain CPU entries, so only enable the actual CUDA devices.
cuda_devices, opencl_devices = cprefs.get_devices()
for device in cuda_devices:
    device.use = (device.type == 'CUDA')
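
If you want to see in the render log which devices ended up enabled, a small (purely illustrative) addition like this prints them:

# Print the detected devices so the log shows what Cycles will use.
for device in cuda_devices:
    print("Device: %s, type: %s, use: %s" % (device.name, device.type, device.use))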

Hi brecht,

I am struggling to send the Python script via Deadline, but I will figure this out later. I just noticed why it is rendering so slowly: it is actually using the CPU instead of the GPU. Do you think it will render with the GPU using the Python script?

Yes, that’s what the Python script is for.

But you can also copy over your user preferences to the render computer, if you want to avoid using a script.
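
As a middle ground, a sketch like this (assuming the same Blender 2.79-style API as the snippets above) enables CUDA once on the slave and saves the preferences, so later renders pick them up without any per-job script:

import bpy

# Select CUDA as the compute device type.
prefs = bpy.context.user_preferences
cprefs = prefs.addons['cycles'].preferences
cprefs.compute_device_type = 'CUDA'

# Enable every detected CUDA device.
cuda_devices, opencl_devices = cprefs.get_devices()
for device in cuda_devices:
    device.use = (device.type == 'CUDA')

# Save the user preferences so the change persists across sessions.
bpy.ops.wm.save_userpref()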

Thanks, I went into Blender and re-saved the user preferences. That helped. Thanks a lot… now ultra speed!


Give us some benchmarks to satisfy our curiosity.