Questions regarding the use of bpy and CUDA

System Information:

  • Operating system: Ubuntu 22.04.3 LTS
  • Graphics card: GeForce RTX 4070 Ti SUPER
  • Driver: NVIDIA-SMI 550.54.15
  • CUDA Version: 12.4
  • python==3.11.9
  • bpy==4.1.0
  • celery==5.3.4


I am developing a microservice for rendering 3D models. The service works together with Celery, running in eventlet mode. It runs inside a Docker container with a single Celery worker, and two such containers run on one server, both doing rendering. Rendering happens on the graphics card using CUDA. The average render takes about an hour and a half.
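The setup described above can be sketched roughly like this. All names (broker URL, paths, task name) are hypothetical; the Celery import is guarded so the sketch stands on its own, and bpy is only imported inside the task:

```python
# Sketch of one Celery worker per container rendering a .blend with bpy.
# Broker URL, paths, and task names are placeholders, not the real service.
try:
    from celery import Celery
    app = Celery("renderer", broker="redis://broker:6379/0")  # assumed broker
    task = app.task
except ImportError:          # let the sketch run where Celery is not installed
    def task(fn):
        return fn

def configure_cycles_for_cuda(scene) -> None:
    """Switch a scene to Cycles on the GPU; `scene` is a bpy scene in real use."""
    scene.render.engine = "CYCLES"
    scene.cycles.device = "GPU"

@task
def render_model(blend_path: str, output_path: str) -> str:
    """Render one model; in the real service this runs inside the
    eventlet-based Celery worker in its container."""
    import bpy  # imported lazily, only inside Blender / the bpy wheel

    bpy.ops.wm.open_mainfile(filepath=blend_path)
    configure_cycles_for_cuda(bpy.context.scene)
    bpy.context.scene.render.filepath = output_path
    bpy.ops.render.render(write_still=True)
    return output_path
```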

The questions:

  1. Is there a way to stop a rendering process initiated through bpy from my code? I plan to halt the process from handlers. Is there another way to achieve this?
  2. Sometimes I encounter the error Error: Failed to create CUDA context (Not permitted). After studying the source code, I suspect this is related to CUDA error 800, arising from the cudaLaunchCooperativeKernel call. Is my assumption correct? And does this mean that multiple renders cannot run on a single machine, even in different Docker containers?
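On question 1: since bpy does not seem to expose a cancel API, one common workaround is to run the render as a separate Blender process instead of in-process bpy, so the worker can terminate it at any time. A minimal sketch, assuming a `blender` binary on PATH and placeholder paths (the `-b`/`-o`/`-f` flags are standard Blender CLI options):

```python
import subprocess

def build_render_cmd(blend_file: str, output: str, frame: int = 1) -> list[str]:
    """Build a background-render command line for the `blender` binary."""
    return [
        "blender", "-b", blend_file,   # run headless on the given .blend
        "-o", output,                  # output path pattern
        "-f", str(frame),              # render a single frame
    ]

def render_with_timeout(cmd: list[str], timeout_s: float) -> int:
    """Run the render as a child process so it can be stopped externally
    (e.g. when a Celery task is revoked), unlike an in-process bpy render."""
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.terminate()              # ask the process to exit
        try:
            proc.wait(timeout=10)
        except subprocess.TimeoutExpired:
            proc.kill()               # force-kill if it does not comply
            proc.wait()
        return -1

cmd = build_render_cmd("/models/scene.blend", "/renders/out_####", frame=1)
print(cmd)
```

The trade-off is losing the in-process bpy API (handlers, scene access) in the worker itself, but gaining a process boundary that the OS can always kill.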

It seems to me that it is impossible to run multiple containers with bpy on the same machine, because they may call the video card simultaneously. I would also like to understand in more detail what kind of kernel the GPU runs when rendering with CUDA; honestly, I don’t understand the C programming language.
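On the CUDA side, it may help to confirm inside each container that Cycles actually sees a CUDA device before rendering. A sketch of that check; the bpy calls are the Cycles add-on preferences API as I understand it (worth verifying against the Blender manual for 4.1), and the filtering helper plus the fallback device list are made up so the snippet runs outside Blender too:

```python
def pick_cuda_devices(devices):
    """Filter (name, type) pairs down to CUDA devices.

    In Blender the pairs would come from the Cycles add-on preferences;
    plain Python here so the helper can be shown standalone.
    """
    return [name for name, dev_type in devices if dev_type == "CUDA"]

try:
    import bpy  # only available inside Blender / the bpy wheel

    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "CUDA"
    prefs.get_devices()                  # refresh the detected device list
    devices = [(d.name, d.type) for d in prefs.devices]
    for d in prefs.devices:
        d.use = d.type == "CUDA"         # enable only CUDA devices
except Exception:
    # Outside Blender: illustrate with a made-up device list.
    devices = [("RTX 4070 Ti SUPER", "CUDA"), ("CPU", "CPU")]

print(pick_cuda_devices(devices))
```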


I’ll ping @brecht here as one of the Cycles devs.

@brecht, please, I need your help.

I don’t think there is any mechanism to stop renders from a handler currently.

I’m not familiar with the “Not permitted” error. On typical machines it is possible to run multiple renders at the same time. I guess there is something specific about Docker or something else about this server configuration, but I don’t have a good guess.
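One concrete thing worth checking when context creation fails with several processes sharing one GPU is the driver’s compute mode: in Default mode multiple processes can create CUDA contexts on the same GPU, while Exclusive_Process limits it to one and Prohibited blocks compute entirely. The nvidia-smi query below is a real flag; the parser and the fallback sample are just for illustration when nvidia-smi is absent:

```python
import subprocess

def parse_compute_modes(csv_output: str) -> list[str]:
    """Parse `nvidia-smi --query-gpu=compute_mode --format=csv,noheader`
    output into one mode string per GPU."""
    return [line.strip() for line in csv_output.splitlines() if line.strip()]

def sharing_allowed(mode: str) -> bool:
    """Only 'Default' mode lets several processes (or containers)
    create CUDA contexts on the same GPU."""
    return mode == "Default"

if __name__ == "__main__":
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=compute_mode", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        out = "Default\n"  # fallback sample when nvidia-smi is unavailable
    for i, mode in enumerate(parse_compute_modes(out)):
        print(f"GPU {i}: {mode} (sharing allowed: {sharing_allowed(mode)})")
```

If the mode turns out to be Exclusive_Process, that alone would explain why the second container’s render fails while the first is running.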
