Slow access to Buffer and GPUTexture in new gpu module

Deprecating the bgl module as part of the switch to Vulkan is great. The current implementation of the gpu module is lacking, though. In particular, using a Buffer or GPUTexture outside this module is pretty painful.

I’ll just explain my use case to put everything into some context. Previously I used the pyopengl library to generate a set of images and load them into Blender. That worked fine, because the final buffer read returns a numpy array which is easily turned into a flat array of pixels to pass to pixels.foreach_set() and push into a Blender image. With the new gpu module and its framebuffer object implementation I decided to switch over and drop the pyopengl dependency. The problem is that GPUTexture.read() returns a Buffer object shaped according to the texture width, height and color depth, and passing this buffer to the pixels of a Blender image is non-trivial: you have to flatten it first, and every method I’ve tried is super slow, taking at least 2 seconds for a simple 1024x1024 32-bit image.
The main bottleneck seems to be the Buffer.to_list() method, which returns a nested list of pixel values and takes around a second on its own. That nested list then still has to be flattened: numpy.ravel and numpy.flatten are very slow here for some reason, and generating a flat list by stepping through all the indices is also very slow.
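For reference, this is roughly what the copy looks like on my end, stripped down to a stand-in texture and a fresh image, so treat the names here as placeholders for my real offscreen texture and target image:

```python
import bpy
import gpu
import numpy as np

W, H = 1024, 1024

# Stand-ins for my real data: an empty float texture and a fresh image.
# In my actual script the texture has already been drawn into.
tex = gpu.types.GPUTexture((W, H), format='RGBA32F')
image = bpy.data.images.new("Result", W, H, float_buffer=True)

buf = tex.read()   # gpu.types.Buffer shaped by width / height / channels

# This is the slow part: to_list() alone takes about a second for 1024x1024,
# and converting the nested Python list into a flat array costs even more.
flat = np.array(buf.to_list(), dtype=np.float32).ravel()

image.pixels.foreach_set(flat)
```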

So the whole drawing part takes a hundredth of a second, while copying the results into the Blender image pixels takes several seconds, which is terrible.
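To be concrete, I’m measuring the two halves roughly like this, continuing from the snippet above (draw_into_texture() is just a placeholder for my actual offscreen drawing):

```python
import time

t0 = time.perf_counter()
draw_into_texture()   # placeholder for my actual offscreen drawing code
t1 = time.perf_counter()

# The copy path from the snippet above.
flat = np.array(tex.read().to_list(), dtype=np.float32).ravel()
image.pixels.foreach_set(flat)
t2 = time.perf_counter()

print(f"draw: {t1 - t0:.3f}s, copy to image: {t2 - t1:.3f}s")
```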

So I wonder if there are any plans to integrate the gpu module more tightly with the rest of Blender. Or maybe I’m overlooking some simple solution; if so, please share it.
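In case it helps the discussion, this is the kind of thing I was hoping would work, assuming gpu.types.Buffer exposed the Python buffer protocol (I don’t know whether it actually does in current builds):

```python
import numpy as np

buf = tex.read()

# Wishful thinking / assumption: if Buffer supported the buffer protocol,
# numpy could wrap it without an intermediate Python list...
arr = np.asarray(buf, dtype=np.float32).ravel()
image.pixels.foreach_set(arr)

# ...or foreach_set() could maybe even accept the buffer directly:
# image.pixels.foreach_set(buf)
```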
