Pass a render result as a numpy array to bpy.types.RenderEngine

I have a problem with the bpy.types.RenderEngine API. I have a PyOpenGL module that generates my render result as a numpy array of shape (resolution_x, resolution_y, 4). I can reshape it and change its type almost instantly, but when I assign it to layer.rect:

pixels = openGL.brender((self.size_x, self.size_y))
layer.rect = pixels

it takes about 3 seconds on average to write the data into the RenderPass collection. It depends on the resolution, of course, for example:
1920x1080 takes about 3 sec on average;
640x480 takes about 0.8 sec on average.
My PyOpenGL module executes in under 0.2 seconds, so I'm losing performance solely because of this one line.
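
For context, my render() follows the standard RenderEngine pattern, roughly like this (a trimmed sketch, not my exact code; openGL.brender is my own module, and render() receives a depsgraph in Blender 2.80+):

import bpy

class OpenGLRenderEngine(bpy.types.RenderEngine):
    bl_idname = "OPENGL_RENDER"
    bl_label = "OpenGL Render"

    def render(self, depsgraph):
        scene = depsgraph.scene
        scale = scene.render.resolution_percentage / 100.0
        self.size_x = int(scene.render.resolution_x * scale)
        self.size_y = int(scene.render.resolution_y * scale)

        result = self.begin_result(0, 0, self.size_x, self.size_y)
        layer = result.layers[0].passes["Combined"]

        pixels = openGL.brender((self.size_x, self.size_y))  # fast, < 0.2 s
        layer.rect = pixels                                   # slow, ~3 s at 1920x1080

        self.end_result(result)
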
For more technical info on the problem, see the GitHub repository.
Is there any way of cutting the time down?
I've tried passing the GL context to Blender directly, but with no success.
I've tried returning a plain list from my code, but glReadPixels() only gives numpy arrays.
The last thing that comes to mind is embedding a pygame window into Blender, but that's just a long-shot idea.

I hit the same issue and worked around it by saving the pixels to an EXR file and loading that with

result.layers[0].load_from_file(FBFILE)

Overall (in my case) it’s faster.
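
The EXR can be written in whatever way is convenient; with Blender's own image API it looks roughly like this (just a sketch: the image name and the use of pixels.foreach_set, which needs Blender 2.83+, are my assumptions; on older versions assign pixels[:] = flat.tolist() instead):

import bpy

def save_exr(pixels, width, height, filepath):
    # pixels: float32 numpy array of shape (height, width, 4)
    img = bpy.data.images.new("fb_tmp", width, height, alpha=True, float_buffer=True)
    img.pixels.foreach_set(pixels.ravel())
    img.filepath_raw = filepath
    img.file_format = 'OPEN_EXR'
    img.save()
    bpy.data.images.remove(img)

save_exr(pixels, self.size_x, self.size_y, FBFILE)
result.layers[0].load_from_file(FBFILE)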

I think there is a related post on this forum with a patch to improve the time taken to pass data from a numpy array. Edit: it’s this thread.


It worked wonders.
It renders in under a second now.
Thank you so much.

Good to hear it works for you :smile:

My approach in the LuxCoreRender addon is to pass a pointer to the RenderPass struct to a C++ module and do the copying there.

This should be the fastest possible approach.
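
On the Python side this boils down to handing the struct address to the compiled module. A sketch (write_combined_pass is a placeholder name, not the actual LuxCore binding):

combined = result.layers[0].passes["Combined"]
# The C++ side takes the address of the RenderPass struct and copies the
# framebuffer directly into its float *rect buffer.
my_cpp_module.write_combined_pass(combined.as_pointer(), self.size_x, self.size_y)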


Well, that’s an interesting option as well. Thanks for the details! Did you run into any issues with struct alignment (and relevant compiler options) when using this trick?

No, we've been using this in our addon for months and it has never caused any problems.

We’re also using this method for other data reads/writes between Blender and our renderer.
Unfortunately it is not possible to use it in all cases: reading the coordinates of hair segments, reading/writing image pixels, or iterating through depsgraph.object_instances does not work this way, as far as I could find out, because those operations require some internal Blender functions to run before the data can be accessed (e.g. acquiring the image buffer).

But in cases where you can get a pointer to a plain array it works, e.g. for mesh data, bgl.Buffer or the render pass.
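
The general pattern in those cases is to wrap the raw address in a numpy array and read or write through it without an extra copy. A sketch (where the pointer and the element count come from depends entirely on the struct you are accessing):

import ctypes
import numpy as np

def float_view(ptr, count):
    # Expose `count` floats starting at raw address `ptr` as a numpy array.
    # No copy is made; writes go straight into Blender's memory.
    return np.ctypeslib.as_array((ctypes.c_float * count).from_address(ptr))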

I had the same problem when writing a render engine, and your answer solved it. I also tried doing it in pure Python:

import ctypes
import numpy

# self._pixels holds half-float (float16) RGBA data; convert it to float32 first.
src = numpy.frombuffer(self._pixels, numpy.dtype("<e")).astype(numpy.float32)
src = src.ctypes.data_as(ctypes.c_void_p)
render_pass = result.layers[0].passes["Combined"]
# 96 is the byte offset of the rect pointer inside the RenderPass struct
# on this 64-bit build; it may differ between Blender versions.
dst = render_pass.as_pointer() + 96
dst = ctypes.cast(dst, ctypes.POINTER(ctypes.c_void_p))
# Copy size pixels * 4 channels * 4 bytes straight into the pass buffer.
ctypes.memmove(dst.contents, src, size * 4 * 4)