Blender 2.8: "Unable to open a display" when rendering in the background (Eevee)

If only there was a way to fake a monitor by plugging some kind of device into the GPU port, it would at least allow Eevee to render. How does Eevee choose the GPU? What exactly tells Eevee: “OK, this GPU has a monitor plugged into one of its ports, you can use it to render” or “there is no monitor, so you are not allowed to render”? If we knew, we could maybe fake it…

Maybe this could work: https://www.overclock.net/forum/366-folding-home-guides-tutorials/384733-30-second-dummy-plug.html

I found these ready-made dummy plugs; I will try them! https://www.amazon.com/FREEGENE-Thunderbolt-Display-Emulator-3840x2160/dp/B0758675CJ/ref=sr_1_2_sspa?s=electronics&ie=UTF8&qid=1547241526&sr=1-2-spons&keywords=GPU+Detection+Monitor+Dummy+Plug+Adapter&psc=1

Hi Strob,

I did not have time to try it, but you can try xvfb + Mesa + llvmpipe/swr.

Milan

@brecht Any updates on headless rendering with EEVEE?

No news. We are in the 2.80 beta phase, not focusing on new features until 2.80 is released.

Rendering via xvfb works, but is super slow!
demo command:
/usr/bin/xvfb-run -n 0 /opt/blender-2.80/blender -b eevee_render.blend -o suzanne.png -f 1

Using blender-2.80-e57ee5934a30-linux-glibc224-x86_64 on a Google VM instance running Ubuntu 18.04 LTS


Hi,

Did you try Mesa 18 with llvmpipe, with the llvmpipe thread count set to the maximum?
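In case it helps, here is a minimal sketch of what that could look like (the values are assumptions; llvmpipe reads its thread count from the `LP_NUM_THREADS` environment variable):

```shell
# Hedged sketch: force Mesa's llvmpipe software rasterizer and raise its
# thread count before launching Blender under xvfb. The values here are
# assumptions -- match LP_NUM_THREADS to your core count.
export LIBGL_ALWAYS_SOFTWARE=1   # ask Mesa to prefer software rendering
export GALLIUM_DRIVER=llvmpipe   # select the llvmpipe Gallium driver
export LP_NUM_THREADS=16         # llvmpipe rasterizer thread count
# then, for example:
#   xvfb-run -n 0 blender -b eevee_render.blend -o //suzanne -f 1
```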

Milan

Hey Milan,
is this working for you?
I can run Eevee via xvfb on an AWS GPU instance, but yeah, slow: 50 s when it should be 2 s.

Oli

Hi,

I managed to get Eevee running and rendering fast on a Debian Google Cloud instance with a GPU (P100), so I guess it should be possible on AWS as well.

First I installed the NVIDIA drivers, then started Xorg, and it worked.

Just make sure glxinfo doesn’t give any errors (it checks that OpenGL can work properly).
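For anyone trying to reproduce this, the setup might look roughly like the following. This is a sketch, not a definitive recipe: the package names and display number are assumptions for a Debian/Ubuntu image, and `nvidia-xconfig --allow-empty-initial-configuration` lets X start even with no monitor attached.

```shell
# Hedged sketch of the steps above on a Debian/Ubuntu GPU instance.
sudo apt-get update
sudo apt-get install -y nvidia-driver mesa-utils        # NVIDIA drivers + glxinfo
sudo nvidia-xconfig --allow-empty-initial-configuration # write an xorg.conf with no monitor
sudo Xorg :0 &                                          # start a bare X server on :0
sleep 3                                                 # give X a moment to come up
DISPLAY=:0 glxinfo | grep -i "opengl renderer"          # should name the GPU, not llvmpipe
```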

Hi all,

Just wondering, how related is this question / issue to that of rendering over ssh connections? I am having some difficulty rendering over ssh, even though I use -X forwarding, which should mean that I have a display on the remote machine.

I have a minimal working example in this StackExchange question. I don’t know if it makes sense to have an essentially duplicate post on devtalk, but I’m happy to investigate this more and discuss it here (or maybe in a new question?) if that makes more sense.

Why use X forwarding for background rendering? If all the rendering happens on the remote GPU, there is nothing that needs to be forwarded to the local machine. If rendering happens on the local GPU, there is no point in involving a remote machine.

Thanks for the reply. I suppose I might not be understanding the purpose of X forwarding. The goal here is to get rendering done on that remote GPU. I guess my question can be simplified as follows:

Given this four line python script:

#!/usr/bin/env blender --python
import bpy
bpy.ops.render.render()
bpy.data.images['Render Result'].save_render(filepath='example.png')

How may I successfully run this over an ssh connection to a headless research server that has state of the art NVIDIA GPUs and recent OpenGL versions? By “running this” I mean running the command:

blender --background --python example-script.py

where example-script.py contains just those four lines above.

Hopefully this makes the question clear! Please let me know if I can clarify and/or provide more information (or if it would be better to start a new question).

EDIT: fixed a typo in the script call, it should say --background, not --render as I had earlier.

I don’t know the steps to do it, just that X forwarding should not be needed since all computation should happen on the remote machine.

I would try what @vadimlobanov says above, install and run graphics drivers and Xorg on the remote machine, then see what glxinfo gives and go from there. There is no support yet for headless rendering in Eevee, so it may be necessary to do some configuration that makes it seem like there is a display even if there is no physical display.

I have the same problem if I try to render with Cycles… can you confirm that neither engine renders headless?


Hello all, I had the same problem rendering with EEVEE on Amazon AWS (or any other cluster). Regular X forwarding doesn’t work for utilizing remote GPUs, so I searched and found out about VirtualGL. Now I can render my scenes with EEVEE on AWS. Since many people have similar problems, I wrote a guide on how I did it, with some explanations, on my website.
https://yigityakupoglu.home.blog/
I guess you can solve the problem with similar software such as TurboVNC, TigerVNC, etc. I am planning to test the same with TurboVNC and write a similar guide in the future. My email address is in the About section of my website. I am happy to be able to give back to the community, thanks!


If you’re using NVIDIA GPUs then you should simply be able to fake an attached monitor in the xorg.conf file by using something like

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "UseDisplayDevice" "none"
    Option         "ConnectedMonitor" "DFP-0"
    Option         "CustomEDID" "DFP-0: /etc/X11/dell-3008wfp.bin"
EndSection

Where the file /etc/X11/dell-3008wfp.bin needs to contain a dump of a real monitor’s EDID information. We use this on our GPU nodes to make them think there is a monitor there, no need for hardware plugs.
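If you need to produce such an EDID dump, one hedged option on Linux is to read it from sysfs on any machine that does have a real monitor attached (the connector name below is an example; list /sys/class/drm/ to find yours):

```shell
# Hedged sketch: dump a connected monitor's EDID from sysfs.
ls /sys/class/drm/                                # find your connector, e.g. card0-DP-1
cat /sys/class/drm/card0-DP-1/edid > monitor.bin  # raw EDID bytes
# then copy monitor.bin to the headless node, e.g. as /etc/X11/dell-3008wfp.bin
```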

I don’t understand why you need VirtualGL to render on a remote node. The important things to keep in mind are:

  1. Do NOT use X forwarding (e.g. don’t use ssh -X ..), as that will actually forward the GPU rendering commands to your local machine and do the rendering there; that is what X forwarding means. Also, X forwarding doesn’t support many of the modern OpenGL extensions that Blender needs.

  2. Make sure the remote machine has a working X server and the right OpenGL drivers. The easiest way to verify both of these is to run DISPLAY=:0.0 glxinfo on the remote node. It should return a lot of info, including some lines like

    direct rendering: Yes
    server glx vendor string: NVIDIA Corporation
    server glx version string: 1.4
    ...
    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: Tesla K40m/PCIe/SSE2
    OpenGL core profile version string: 4.6.0 NVIDIA 418.39
    

    This will tell you that the OpenGL implementation is provided by the NVIDIA drivers, while using a GPU - a Tesla K40m in the example above - for rendering (the direct rendering: Yes part is crucial here). The output also tells you that OpenGL 4.6 is available, which is new enough for Blender 2.8 (which needs OpenGL 3.3 or higher).

    If you get lines containing text like Mesa or llvmpipe then the node (in the current configuration) does not provide GPU-based rendering, but software-based OpenGL rendering. In this case the OpenGL drivers might not be correctly installed, the X server might not be configured correctly or the node might not have a GPU at all. It could also mean that the Blender executable is linking to Mesa, instead of the NVIDIA-based OpenGL library (but GLVND should help these days).

    If you get a message like Unable to open a display then either the X server isn’t running, the appropriate DISPLAY value isn’t set or there’s a permissions error accessing the X server.

  3. On Linux with an NVIDIA GPU the only way to get hardware-accelerated OpenGL rendering is currently by going through the X server with GLX. (The exception to this is to use EGL, which can be used to get hardware-accelerated OpenGL rendering without GLX, but Blender doesn’t support EGL, nor the OpenGL ES version it provides).

    So if there’s no X server running on the Linux server node Blender will not be able to use hardware-accelerated OpenGL.

    And VirtualGL and TurboVNC won’t make a difference in this respect. In fact, TurboVNC merely provides an alternative X server that holds the remote desktop, while VirtualGL intercepts certain GLX and OpenGL calls to divert the rendering from the TurboVNC X server to the real X server (which has access to the GPU, as noted above).

  4. EEVEE and Cycles differ in the way they use and need a GPU. EEVEE always uses OpenGL and can’t work without it. But Cycles’ GPU rendering mode is based on CUDA and doesn’t need OpenGL. Therefore, GPU-based Cycles rendering (through CUDA) can be used without having an X server running.

  5. Blender’s -b (or --background) option also makes a difference here. If you don’t use the -b option then Blender will start normally and will try to initialize the GUI, which needs OpenGL and X.

    But if you do use the -b option then the GUI part isn’t started. However, as mentioned in the previous point, an EEVEE render always needs OpenGL, regardless of whether -b was used. But a Cycles render can work without OpenGL and X when -b is used, even when doing GPU-based rendering.
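As a footnote to points 1–3, the glxinfo checks can be scripted; here is a rough sketch (the heuristics are my own, not an official Blender check):

```shell
# Hedged sketch: scan saved glxinfo output for the problems described above.
# Usage: DISPLAY=:0.0 glxinfo > info.txt 2>&1; check_glx info.txt
check_glx() {
    grep -q "direct rendering: Yes" "$1" \
        || echo "no direct rendering - hardware OpenGL unavailable"
    grep "OpenGL renderer string" "$1" | grep -qE "llvmpipe|softpipe" \
        && echo "software renderer in use - check drivers and X config"
    grep -q "Unable to open a display" "$1" \
        && echo "no X server reachable - check DISPLAY and permissions"
    return 0
}
```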

This might be quite a bit of (technical) detail, but I hope this is useful for future reference as it seems to be misunderstood quite often.

And to come back to the original issue: rendering an image with EEVEE using the -b background option works for me when running it on a Linux node with X and OpenGL correctly installed. I.e. this produces the correct output picture for me:

DISPLAY=:1.0 ~/software/blender-2.80-linux-glibc217-x86_64/blender -b eevee.blend -o //doh -f 1

By the way, a situation where VirtualGL + TurboVNC are really nice is when you want to have remote desktop on a GPU node for running Blender in, including being able to use the GUI and GPU-based rendering. We use that all the time and it works great. But VirtualGL + TurboVNC should not be needed to simply do batch rendering from the command-line.

Edit: added points 3, 4 and 5


By the way, what indications do you have that EEVEE doesn’t support headless rendering? And what is “headless” precisely in this case? If it means “without a monitor attached” then that should be solvable (see my reply on faking monitors above).

Thanks a lot for the detailed answer.
Until now my problem was that I didn’t know I should run the application with DISPLAY=:0.0.
I went over your steps and can confirm that this works, so I was wrong: you don’t have to use VirtualGL.
I will correct the page and convert it into a guide on using VirtualGL for running the Blender GUI.
Thanks

Great, happy to be of help!