Multithreading support (please :) )

Hello everyone,

I really believe that adding multithreading support would open up many paths to extend and improve the current algorithms.

With this topic I would like to find out if there is real interest from users and, of course, whether the development team has something against it (and how we can convince them to overcome that something).

My intention is to use Blender for data science visualisation, where flows and generators are computed by other languages or programs, while keeping full freedom to plot/animate the results in any shape you like.

This is my second attempt, so far without success, at defining a process that exposes control of Blender via a REST API.

Here is the example code used to test this; the commented line crashes Blender with "Segmentation fault (core dumped)".

from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
import bpy
from random import randint
 
 
class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print(threading.current_thread().name, "handle get")
        bpy.data.objects['Cube'].location.x += 0.05
        #bpy.ops.mesh.primitive_cube_add(location=(randint(-10,10),randint(-10,10),randint(-10,10)))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Hello, world!')
    def log_message(self, format, *args):
        return
 
 
class ServerThread(threading.Thread):
     def __init__(self,port):
         super(ServerThread, self).__init__()
         self.port=port
 
     def run(self):
         httpd = HTTPServer(('localhost', self.port), SimpleHTTPRequestHandler)
         httpd.serve_forever()
 
 
 
httpServer = ServerThread(8000)
httpServer.setDaemon(True)
httpServer.start()

Running this in the Scripting window is stable (maybe) if you only move an existing mesh, but it will fail if you add new ones.
Simply do a 'curl localhost:8000' and it should move the "Cube" mesh. Uncomment the cube-adding command and it will crash.

There is a lot to discuss on this topic and I would be happy to be part of it.

Any chance of convincing someone to start fixing the multithreading issues?


I don't think you will see a thread-safe Python API in Blender any time soon. The uses for multithreading are probably too specific compared to the effort of making the API thread-safe. Plus there are more pressing improvements keeping the devs busy. Note that I'm not a Blender developer, so I'm making some assumptions above based on my own experiences with making thread-safe code (both Python and C++).

In the short term there are at least two workarounds you could use to rewrite your API to make it work:

  1. In your code the BPy API methods (like adding a cube) are called from the HTTP handling thread. This will not work as it calls the method from the wrong thread. Instead you could add a main loop to your script (not in a separate thread) that fetches events from a thread-safe queue.Queue and does the actual Blender API calls. Events would be placed in the queue by the current HTTP GET handler, e.g. an event {event: "add_cube", location: (...)}. You would need to enter the event handling loop after the call to httpServer.start(). Plus there needs to be a "done" event sent from the GET handler so the message loop knows when it's done and can exit.

  2. Another option would be to get rid of the http stuff and write your own single-threaded I/O loop using multiplexing (e.g. select.select() or the new asyncio module). This is a bit more work and needs HTTP handling code, like https://github.com/njsmith/h11, but you can make the code completely single-threaded. It also "feels" a bit better than using multi-threaded HTTP handling code that can only be executed in a serial way (as the Blender API is single-threaded). A minimal sketch of this idea follows below.
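To illustrate option 2: here is a rough, untested sketch of the select-based idea. It skips real HTTP parsing, assumes an object named "Cube" exists, and drives the polling from a bpy.app.timers callback (more on timers further down), so the bpy calls always happen on the main thread:

import select
import socket
import bpy

# Non-blocking listening socket, polled from a Blender timer so the UI never blocks.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('localhost', 8000))
server.listen(5)
server.setblocking(False)

def poll_socket():
    # select() with a zero timeout returns immediately, so the UI is never blocked
    readable, _, _ = select.select([server], [], [], 0)
    for s in readable:
        conn, _addr = s.accept()
        conn.settimeout(1.0)
        try:
            conn.recv(1024)  # read and ignore the request; real code would parse HTTP here
        except socket.timeout:
            pass
        # Safe: this callback runs on Blender's main thread
        bpy.data.objects['Cube'].location.x += 0.05
        conn.sendall(b'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK')
        conn.close()
    return 0.1  # poll again in 0.1 seconds

bpy.app.timers.register(poll_socket)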

Just some thoughts, hope it helps in some way


I understand the challenge of multithreading support… I saw it in really old discussions. I am just curious whether there would be sufficient demand to raise some interest in this.

Regarding point 1, it is a valid point that I will try out, but I am not sure how the script can keep looping without freezing the main thread (I need to do more research on this).

Can someone suggest how I can create global variables with "unsupported" types?

As I need to terminate (join()) the spawned thread, I need to save the reference somewhere so I can access it when the script is re-run. I tried attaching it to bpy.context but that is not allowed.

I would also be interested in an answer.
According to the documentation it is not recommended to use threading, but meanwhile it says:

the subprocess and multiprocess modules can be used with Blender and make use of multiple CPU's too.

https://docs.blender.org/api/current/info_gotcha.html#strange-errors-using-threading-module

The latter remarks in the docs are for cases where you want to perform substantial computation that should not block the main Python thread. For that, the two mentioned modules can be useful. But that is a separate use case from the network handling at the start of this discussion, for which you don't really need multi-threading.
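As a rough sketch of that computation-offloading use case (the compute_points.py script here is hypothetical), you could have another process do the number crunching and only apply the results on Blender's main thread:

import json
import subprocess
import bpy

# Hypothetical external script that prints a JSON list of [x, y, z] points.
# Note: subprocess.run() blocks until the child exits; for long jobs you would
# use subprocess.Popen and poll it, e.g. from a timer, to keep the UI responsive.
result = subprocess.run(["python3", "compute_points.py"],
                        capture_output=True, text=True, check=True)
points = json.loads(result.stdout)

# Apply the precomputed data from the main thread (adds an empty at each point).
for x, y, z in points:
    bpy.ops.object.add(location=(x, y, z))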

You should be able to store global variables on the bpy module directly, I believe, but 'bpy.context' is most likely off-limits for that.

The sort-of-conventional way of storing global variables is not the bpy module, but the driver namespace :slight_smile:
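For example, something along these lines (a minimal sketch; the "worker" key name is arbitrary) survives re-running the script:

import threading
import bpy

# Any Python object (threads, functions, ...) can be stored under an arbitrary key,
# and the reference is still there on the next run of the script.
if "worker" not in bpy.app.driver_namespace:
    bpy.app.driver_namespace["worker"] = threading.Thread(target=lambda: None, daemon=True)
    bpy.app.driver_namespace["worker"].start()
else:
    print("worker thread already stored, alive:", bpy.app.driver_namespace["worker"].is_alive())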

The progress so far:

  • any loop in the main thread (the script thread) will freeze the UI. I was not able to have a long-running task in bpy.ops.text.run_script() without freezing the UI. So you cannot have a thread-safe queue reference created and consumed from that thread.

  • The subprocess module is for splitting some work into separate, isolated processes with IPC over pipes. But it is still bound to the run-script thread; it cannot keep a long-running process going in parallel.

  • the multiprocessing module gives you a totally separate process doing work for you, but the bpy module from the main Blender process is not accessible from the new process. From my simple test it looks like it creates a new Blender context. It seems to be the same as having a headless Blender instance.

The code:

import multiprocessing
import time
import bpy
from random import randint

def daemon():
    p = multiprocessing.current_process()
    print('Starting:', p.name, p.pid)
    
    try:
        print("Current cube location")
        print(bpy.data.objects['Cube'].location)
        print("Execute move")
        bpy.data.objects['Cube'].location.x += 0.10
        print("After move cube location")
        print(bpy.data.objects['Cube'].location)

        bpy.ops.mesh.primitive_cube_add(location=(randint(-10,10),randint(-10,10),randint(-10,10)))
        print("Adding a new cube as Cube.001")
        for x in range(len(bpy.data.objects)): 
            print( bpy.data.objects[x])
    except Exception as e:
        print("An exception occurred")
        print(e)

    time.sleep(5)
    print('Exiting :', p.name, p.pid)
    
d = multiprocessing.Process(name='daemon', target=daemon)
d.daemon = True
d.start()

@Skarn thanks for the suggestion. I did try to see if calling such a function from the other thread would work… it did not :slight_smile: (as expected)

from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
import bpy
from random import randint


def simpleCubeAdd():
    print(threading.current_thread().name, "adding a new cube")
    bpy.ops.mesh.primitive_cube_add(location=(randint(-10,10),randint(-10,10),randint(-10,10)))
bpy.app.driver_namespace["simpleCubeAdd"] = simpleCubeAdd

class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        import bpy
        bpy.app.driver_namespace["simpleCubeAdd"]()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Hello, world!')
 
class ServerThread(threading.Thread):
     def __init__(self,port):
         super(ServerThread, self).__init__()
         self.port=port
 
     def run(self):
         httpd = HTTPServer(('localhost', self.port), SimpleHTTPRequestHandler)
         httpd.serve_forever()
 
if "server" in bpy.app.driver_namespace.keys():
    if(bpy.app.driver_namespace["server"].isAlive()):
        print("server thread is alive, try stoping it")
        bpy.app.driver_namespace["server"].stop() # TBD
else:
    print("server is not alive, starting now")
    bpy.app.driver_namespace["server"] = ServerThread(8000)
    bpy.app.driver_namespace["server"].setDaemon(True)
    bpy.app.driver_namespace["server"].start()

The fact that I can create new function-type keys in driver_namespace raised the following question:

How about having the HTTP server thread running in C and mapping the call to the key stored in bpy.app.driver_namespace?

I assume that would be the only way to have a long-running thread accessing the same memory space as the Blender process while still allowing fast customization via Python scripts.

Any script run from a text block (or using a different trigger like an operator) will indeed need to finish at some point, preferably within a few milliseconds; otherwise you indeed lock up the UI. However, you can launch the HTTP handler thread once and keep it running, but the handling of incoming events in the queue needs to be done repeatedly, e.g. using a timer or some other mechanism. There is no way around this, as the bpy calls that edit the scene need to be made from the same thread the UI runs on.

Edit: see this example in the docs: https://docs.blender.org/api/current/bpy.app.timers.html#use-a-timer-to-react-to-events-in-another-thread


Hi Paul,

Thanks for the link. I made some sort of progress.

I believe there is some fundamental issue with the Python API; I need support from a developer.

Here is a script that is correctly executing the command of adding a new cube from the main thread:

import threading
import bpy
from random import randint


def addSimpleCube():
    print(threading.current_thread().name, "executing addSimpleCube function")
    bpy.ops.mesh.primitive_cube_add(location=(randint(-10,10),randint(-10,10),randint(-10,10)))
    
addSimpleCube()

But the same function, when called via processing the queued command, is failing:

from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
import bpy
from random import randint
import queue


execution_queue = queue.Queue()

def run_in_main_thread(function):
    print(threading.current_thread().name, "Adding function string to queue")
    execution_queue.put(function)


def addSimpleCube():
    print(threading.current_thread().name, "Executing addSimpleCube function")
    bpy.ops.mesh.primitive_cube_add(location=(randint(-10,10),randint(-10,10),randint(-10,10)))


class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print(threading.current_thread().name, "handle get")
        run_in_main_thread(addSimpleCube)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Success!\n')
    def log_message(self, format, *args):
        return
 
 
class ServerThread(threading.Thread):
     def __init__(self,port):
         super(ServerThread, self).__init__()
         self.port=port
 
     def run(self):
         httpd = HTTPServer(('localhost', self.port), SimpleHTTPRequestHandler)
         httpd.serve_forever()
 
httpServer = ServerThread(8000)
httpServer.setDaemon(True)
httpServer.start()


def execute_queued_functions():
    print(threading.current_thread().name, "timer consuming queue")
    while not execution_queue.empty():
        function = execution_queue.get()
        print(threading.current_thread().name, "function found name:", function)
        function()
    return 1.0

bpy.app.timers.register(execute_queued_functions)  

STDOUT

./blender
Warning: Could not find a matching GPU name. Things may not behave as expected.
Detected OpenGL configuration:
Vendor: VMware, Inc.
Renderer: SVGA3D; build: RELEASE;  LLVM;
found bundled python: /home/klusht/opt/blender-2.82a-linux64/2.82/python
Warning: property 'release_confirm' not found in keymap item 'OperatorProperties'
MainThread timer consuming queue
MainThread timer consuming queue
MainThread timer consuming queue
Thread-1 handle get
Thread-1 Adding function string to queue
MainThread timer consuming queue
MainThread function found name: <function addSimpleCube at 0x7fb6ed6dfe60>
MainThread Executing addSimpleCube function
Writing: /tmp/blender.crash.txt
Segmentation fault (core dumped)

cat /tmp/blender.crash.txt

# Blender 2.82 (sub 7), Commit date: 2020-03-12 05:06, Hash 375c7dc4caf4
bpy.ops.text.run_script()  # Operator
bpy.ops.mesh.primitive_cube_add(enter_editmode=False, location=(-5, 3, 0))  # Operator

# backtrace
./blender(BLI_system_backtrace+0x1d) [0x6fbd4ad]
./blender() [0x1658449]
/lib/x86_64-linux-gnu/libc.so.6(+0x46470) [0x7fb71d9fb470]
./blender() [0x19c3756]
./blender() [0x19c6eb4]
./blender() [0x19c76f3]
./blender(mesh_buffer_cache_create_requested+0xd18) [0x19c9d78]
./blender(DRW_mesh_batch_cache_create_requested+0x9bd) [0x19a39ad]
./blender() [0x1953b50]
./blender(DRW_draw_render_loop_ex+0x547) [0x1955737]
./blender(view3d_main_region_draw+0x77) [0x1f2d3b7]
./blender(ED_region_do_draw+0x8f1) [0x1b6ae91]
./blender(wm_draw_update+0x496) [0x181e1a6]
./blender(WM_main+0x30) [0x181c210]
./blender(main+0x317) [0x159ea77]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fb71d9dc1e3]
./blender() [0x1654cfc]

This fails even using the example provided in the link…

import threading
import bpy
from random import randint


def addSimpleCube():
    print(threading.current_thread().name, "executing add simple cube function")
    bpy.ops.mesh.primitive_cube_add(location=(randint(-10,10),randint(-10,10),randint(-10,10)))


def execute_queued_functions():
    print(threading.current_thread().name, "timer exec")
    addSimpleCube()
    return 1.0


bpy.app.timers.register(execute_queued_functions)

How are you running that script and with which version of Blender? For me the example from the docs works in 2.82.7 adding a cube every second when starting it from a text block. It does not crash.

Reported success too soon :slight_smile: The issue seems to be that the 3D viewport update fails. When I run the script without having a 3D viewport open the script works. So does your HTTP handling script; I can see it update the scene in the outliner. However, when I add a 3D viewport I also get a crash.

I am accessing the Scripting window… pasting the code into a new text file and clicking "Run Script".

I have the same version running in a dev VM, and I can see the same behaviour on Windows 64-bit (both portable versions).

Blender:
===========================================
version: 2.82 (sub 7), branch: master, commit date: 2020-03-12 05:06, hash: 375c7dc4caf4, type: Release
build date: 2020-03-12, 05:30:40
platform: Linux

If possible, can you try the one containing the server and simply access localhost:8000 in a browser… that one should add one cube, as it is the same code, only that the queue is populated from a second thread.

This might be a subtle issue in the way operators work in Blender. Their execution depends on the context in which they are called.

That is a really valuable finding.
This narrows the issue down to the 3D view when new objects are added, and not necessarily to the multithreaded access.
From a general understanding, a thread spawned from the main process thread has access to the same memory space, so a segmentation fault should not be the issue, but rather race conditions.

The following action, moving an existing mesh, will not crash even when it is called from a parallel thread.

Running this from the same "scripts" view will update the 3D view every second without any issues.
Make sure that you have a mesh called "Cube".

import threading
import bpy

def move_cube_mesh_on_x():
    print(threading.current_thread().name, "executing move_cube_mesh_on_x function")
    bpy.data.objects['Cube'].location.x += 0.05

def execute_queued_functions():
    print(threading.current_thread().name, "timer exec")
    move_cube_mesh_on_x()
    return 1.0

bpy.app.timers.register(execute_queued_functions)

Soooo… this narrows it down to the problem possibly being in the 3D view for new objects added by anything other than the bpy.ops.text.run_script() operator.

I think it has something to do with adding mesh objects specifically. If I change the addition call to bpy.ops.object.add(location=(randint(-10,10),randint(-10,10),randint(-10,10))) (which adds an empty) it works, even with a 3D view open.

This bug report, especially comment https://developer.blender.org/T62074#632298, is relevant. Passing in the window and screen context seems to solve the crash. I.e. this works for me:

import threading
import bpy
from random import randint

def addSimpleCube(ctx):
    print(threading.current_thread().name, "executing add simple cube function")
   
    bpy.ops.mesh.primitive_cube_add(ctx, location=(randint(-10,10),randint(-10,10),randint(-10,10)))


def execute_queued_functions():
    window = bpy.context.window_manager.windows[0]
    ctx = {'window': window, 'screen': window.screen}  
    
    print(threading.current_thread().name, "timer exec")
    addSimpleCube(ctx)
    return 1.0

bpy.app.timers.register(execute_queued_functions)

For reference, here's an updated script for your HTTP-driven cube addition that works:

from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
import bpy
from random import randint
import queue
from functools import partial


execution_queue = queue.Queue()

def run_in_main_thread(function):
    print(threading.current_thread().name, "Adding function string to queue")
    execution_queue.put(function)


def addSimpleCube(ctx):
    print(threading.current_thread().name, "Executing addSimpleCube function")
    bpy.ops.mesh.primitive_cube_add(ctx, location=(randint(-10,10),randint(-10,10),randint(-10,10)))


class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print(threading.current_thread().name, "handle get")
        run_in_main_thread(addSimpleCube)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Success!\n')
    def log_message(self, format, *args):
        return
 
 
class ServerThread(threading.Thread):
     def __init__(self,port):
         super(ServerThread, self).__init__()
         self.port=port
 
     def run(self):
         httpd = HTTPServer(('localhost', self.port), SimpleHTTPRequestHandler)
         httpd.serve_forever()
 
httpServer = ServerThread(8000)
httpServer.setDaemon(True)
httpServer.start()


def execute_queued_functions():
    window = bpy.context.window_manager.windows[0]
    ctx = {'window': window, 'screen': window.screen}  
    
    print(threading.current_thread().name, "timer consuming queue")
    while not execution_queue.empty():
        function = execution_queue.get()        
        print(threading.current_thread().name, "function found name:", function)
        function(ctx)
    return 1.0

bpy.app.timers.register(execute_queued_functions)

First of all… Python multithreading has this inherent problem; it's not a Blender API problem.

I'm not sure about writing to the Blender API from multiple threads, but I could see what you're trying to do being achieved using asyncio:

https://docs.python.org/3/library/asyncio.html
Here's an example of an asyncio web server:
https://docs.aiohttp.org/en/v2.3.5/

Have that running and spawn asyncio "tasks" to add data to your Blender scene when a request comes in.
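I'm not sure aiohttp can run as-is inside Blender (its web.run_app() call blocks), but here is a rough sketch of the plain-asyncio variant of that idea: the event loop is stepped from a bpy.app.timers callback, so the handler coroutines, and the bpy calls inside them, run on the main thread. It assumes an object named "Cube" exists, the step_asyncio() name is mine, and as found above, operator calls may still need the window/screen context override:

import asyncio
import bpy

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

async def handle(reader, writer):
    await reader.read(1024)  # read and ignore the request; real code would parse HTTP
    # This runs while the loop is stepped from the timer, i.e. on Blender's main thread.
    bpy.data.objects['Cube'].location.x += 0.05
    writer.write(b'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK')
    await writer.drain()
    writer.close()

# Start listening on localhost:8000
loop.run_until_complete(asyncio.start_server(handle, 'localhost', 8000))

def step_asyncio():
    # Run whatever is ready (new connections, handler coroutines), then return to Blender.
    loop.call_soon(loop.stop)
    loop.run_forever()
    return 0.1

bpy.app.timers.register(step_asyncio)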


Niiiice.

I remember reading that the context could not be passed to different threads.
Really happy to see this working.

Thanks @PaulMelis