Technical barriers of separating UI thread from calculations

What are the technical limitations of the current system that prevent separating UI processing from calculations? I understand that Python drives the UI and triggers some long-running calculations, such as when initialising the rendered viewport or the Eevee viewport.

Is it possible to have a system where these long-running processes are done in an alternate thread? Is the issue the non-thread-safe nature of Python? I feel there is no inherent necessity for, say, the rendered viewport initialisation to block the main UI thread, so I would like to learn what technical issues exist and what considerations have been made on this topic.


It’s certainly possible, but requires significant code work to ensure the thread running the operator is not writing to data that the main UI thread is reading. Python is not a problem specifically, though the fact that scripts have unrestricted access to any data structures makes things harder.

It’s a matter of working out the design of which parts should be able to access what, verifying that the code follows it, and implementing a system to run operators in threads and interrupt them.

Interesting. So nothing is a complete deal-breaker technologically; it just requires a lot of work to get things to behave.

I guess it is probably less of an issue for those with more powerful machines, but I do feel it is a good way to get a snappier-feeling interface without simply reducing the amount of work being done. Scripts having access to any data at any time does make it significantly more complex.

Has a shift to offloading heavy work to alternate threads been considered by developers? If so, was it determined that it added too much complexity for what is gained? Or has it just not been considered much?

We think it’s a useful improvement to add, it just hasn’t risen to the top of the priority list yet.

Logical. Thanks for your time and your work on Blender. :blush:

Blocking-wise, you have two solutions here that are easy to use (sketches of both follow below):
a) Python threads
b) Modal operators
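
For (a), a minimal sketch of the usual pattern, assuming a made-up `heavy_calculation`: run the work in a `threading.Thread` and poll for the result from the main thread with `bpy.app.timers`, so the worker never touches `bpy.data` itself:

```python
import queue
import threading

import bpy

result_queue = queue.Queue()

def heavy_calculation():
    # Pure computation only -- touching bpy.data from a
    # non-main thread is not safe.
    result_queue.put(sum(i * i for i in range(10_000_000)))

def poll_result():
    # Runs on the main thread; reschedules itself until the worker is done.
    try:
        value = result_queue.get_nowait()
    except queue.Empty:
        return 0.2  # check again in 0.2 seconds
    print("calculation finished:", value)
    return None  # returning None unregisters the timer

threading.Thread(target=heavy_calculation, daemon=True).start()
bpy.app.timers.register(poll_result)
```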

Python also has the multiprocessing module, which could be used to, say, launch a Blender instance in the background and have that background instance do the heavy-lifting calculations.
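
A sketch of that idea; the worker script path is a placeholder, but `bpy.app.binary_path` and the `--background`/`--python` flags are real Blender CLI features:

```python
import subprocess

import bpy

# Launch a second, headless Blender that runs a worker script.
proc = subprocess.Popen([
    bpy.app.binary_path,      # path to the running Blender executable
    "--background",           # no UI
    "--factory-startup",      # skip user prefs/addons for a clean instance
    "--python", "/path/to/heavy_job.py",  # hypothetical worker script
])

# Later, check for completion without blocking the UI:
if proc.poll() is not None:
    print("background job finished with code", proc.returncode)
```

(`subprocess` is shown here rather than `multiprocessing`, since a background Blender is a separate executable rather than a forked Python worker.)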

If moving data between the instances becomes a performance issue, a common solution is shared memory, which is as fast to access as normal memory, giving top performance and also eliminating the need to duplicate data. Using shared memory is super easy, but that does not mean you won’t have to modify Blender’s source code to use it.
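
A minimal sketch using the standard library’s `multiprocessing.shared_memory` (Python 3.8+); the block name and sizes here are arbitrary:

```python
import numpy as np
from multiprocessing import shared_memory

SIZE = 1024 * 1024  # 1 MiB

# Producer side: allocate a named block and write results into it.
shm = shared_memory.SharedMemory(create=True, size=SIZE, name="blender_job")
data = np.ndarray((SIZE // 4,), dtype=np.float32, buffer=shm.buf)
data[:] = 0.0  # ... the heavy process writes its output here ...

# Consumer side (normally another process): attach by name; no copy is made.
shm2 = shared_memory.SharedMemory(name="blender_job")
view = np.ndarray((SIZE // 4,), dtype=np.float32, buffer=shm2.buf)
print(view[:10])

# Cleanup once both sides are finished (release views before closing).
del view
shm2.close()
del data
shm.close()
shm.unlink()
```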

Of course, if the data can be saved in a .blend file, you can avoid the need for shared memory altogether for a long-running process like baking. For shorter calculations you will have to rely more on modal operators to handle the UI, which by default won’t block any other operation.
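
A minimal, hypothetical skeleton of that modal-operator pattern: the work is cut into small slices driven by a window-manager timer, so the UI keeps processing events between slices:

```python
import bpy

class WM_OT_long_task(bpy.types.Operator):
    """Do a long calculation in small slices, one per timer tick."""
    bl_idname = "wm.long_task"
    bl_label = "Long Task"

    _timer = None
    _progress = 0

    def modal(self, context, event):
        if event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'CANCELLED'}
        if event.type == 'TIMER':
            self._progress += 1  # do one small slice of the real work here
            if self._progress >= 100:
                context.window_manager.event_timer_remove(self._timer)
                self.report({'INFO'}, "Done")
                return {'FINISHED'}
        return {'PASS_THROUGH'}

    def execute(self, context):
        wm = context.window_manager
        self._timer = wm.event_timer_add(0.05, window=context.window)
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}

bpy.utils.register_class(WM_OT_long_task)
```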


If the work to be done is written in Python, can be expressed as a function of its inputs (just returning a value without modifying anything else), and the calculation time is large compared to the size of the data, then using threads is fairly straightforward in most technologies I’ve worked with, but I haven’t worked with Python in particular. I’ve heard that Python’s async abilities are a bit lacking, though.
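
For reference, the straightforward version of that in Python’s standard library, assuming a made-up `pure_function`:

```python
from concurrent.futures import ThreadPoolExecutor

def pure_function(n):
    # Depends only on its input and returns a value;
    # nothing else is read or modified.
    return sum(i * i for i in range(n))

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(pure_function, 10_000_000)

# The caller is free to keep doing other work...
print("still responsive")

# ...and collects the result when it needs it.
print(future.result())
```

One CPython caveat: because of the GIL, two pure-Python threads time-share a single core rather than running in parallel, so for CPU-bound Python work `ProcessPoolExecutor` is often the better fit.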

Using other Blender processes and files seems more like a workaround than a proper solution to having a responsive UI; I think the challenges and limitations would start to become apparent quite quickly.

Python is a simple language; it basically goes like this:

“I can go fully dynamic and easy, but I will be slow. I could be super fast too, but kiss goodbye to my dynamic nature and ease of use. Pick your poison.”

I don’t think it’s an exaggeration to claim that Python has more performance-oriented libraries than any other language out there, including C++. I once even found an assembly inliner that makes it possible to go fully low-level in Python.

A small sample can be found here
https://wiki.python.org/moin/ParallelProcessing

But that’s not all; there is no mention of PyCUDA there, for example. Probably what you heard was about the asyncio module that comes included with Python. There are so many libraries out there that it’s hard not to find what you’re looking for. Even wrapping your own C code for Python is actually pretty easy.
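
For instance, a minimal `ctypes` sketch; the library name and function are hypothetical, and the C side is just one function compiled to a shared library:

```python
import ctypes

# Assumes a tiny C file compiled separately, e.g.:
#   /* fastmath.c */  double sum_squares(long n) { ... }
#   cc -shared -fPIC -o fastmath.so fastmath.c
lib = ctypes.CDLL("./fastmath.so")
lib.sum_squares.argtypes = [ctypes.c_long]
lib.sum_squares.restype = ctypes.c_double

print(lib.sum_squares(10_000_000))  # executes at C speed
```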

This is how I do my commercial project: I made a plugin system that allows me to write C code without having to rebuild Blender (it’s also a fun way to take compile times from 15 seconds down to 2 seconds) and have my C functions automatically wrapped for Python. I do drawing in C (using Blender’s internal functions) and event handling in Python, and so far it has been blazing fast. When I want performance I drop to C; for the rest I use Python.

This is why Python replaced C++ as the language of AI: they just wrapped their C++ libraries for Python. So yes, you can definitely run Python on multiple cores, embedded devices and GPUs; I do not think there is a thing that Python cannot do. The problem is that the more you optimize code, the more it feels like coding in C, which kind of defeats the purpose of using Python in the first place. Which is why C and C++ are not going away any time soon.
