Integrating border render into full image

Hi,
I’m working on integrating appleseed into Blender. Probably best to describe what I’m trying to do.

When appleseed starts working on a tile, it sends its location to Blender. That location data is used to manually draw a highlight around that tile region. When the tile finishes, its pixel data overwrites the highlight markings, and all is well.

When the render is cancelled, the incomplete tiles are still highlighted, which obviously looks wrong. We created a parallel image buffer array in Python that stores the finished pixel data indexed by x/y location; if the render is cancelled, this buffer is loaded into the image viewer (effectively erasing the tile marks). This is done through a single begin_result call set to the image dimensions, triggered by the test_break condition. It works great on full image renders. However, when border rendering is enabled it falls apart, because I’m not sure how to format the x/y location data of the secondary buffer or how to properly call the begin_result function when border rendering is enabled.
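For reference, the bookkeeping for a parallel buffer like this might look roughly as follows (a sketch only; make_buffer and store_tile are hypothetical names, not blenderseed’s actual code):

```python
# Rough sketch of the fallback-buffer bookkeeping described above.
# make_buffer/store_tile are hypothetical names, not blenderseed's API.

def make_buffer(width, height, fill=(0.0, 0.0, 0.0, 1.0)):
    """One RGBA entry per pixel, row-major from the bottom-left
    (the order a render pass rect expects)."""
    return [list(fill) for _ in range(width * height)]

def store_tile(buf, img_w, tile_x, tile_y, tile_w, tile_h, tile_pixels):
    """Copy a finished tile's pixels into their absolute positions in
    the full-image buffer. tile_pixels is row-major within the tile."""
    for row in range(tile_h):
        for col in range(tile_w):
            buf[(tile_y + row) * img_w + (tile_x + col)] = \
                tile_pixels[row * tile_w + col]
```

On test_break the whole buffer would then be pushed back in one operation: a single begin_result(0, 0, width, height), assign the buffer to the Combined pass rect, and end_result.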

For example, let’s say I have a 1920x1080 image with a 256x256 border region.

I’ve tried keeping our image buffer at the full 1920x1080 resolution and loading the pixels from the border region into their equivalent positions in the full image. I then tried to load the full-sized buffer (just like we do in a borderless render). That failed. I also tried creating the buffer at 256x256, loading the pixels into their absolute positions there, and then putting that buffer in the right location of the full-sized image during the begin_result call. That failed too.

Any advice on how I should go about this? Like I said our method works great for full sized renders, it’s just the border region renders that have issues.

Of course, if there’s a better way to highlight tiles (like one built into Blender) I’m all ears. I tried experimenting with the ‘use_highlight_tiles’ boolean in the RenderEngine type, but there’s no documentation on what it does or how to use it, so I’m not even sure if it’s relevant.

Thanks!

I think it’s better to use use_highlight_tiles; anything else is probably working against the design. The API is a bit weird, I’m not sure why it was done like that. But something like this should work:

# at the start of rendering
engine.use_highlight_tiles = True

# highlight tile when starting to render it
rr = engine.begin_result(..)
engine.end_result(rr, cancel=True, highlight=True)

# render actual tile
rr = engine.begin_result(..)
.. set pixel data ..
engine.end_result(rr)

Hi Brecht. Thanks for the reply.

I tried your suggestion and nothing happened. The tiles appeared when they finished rendering as usual, but there was no highlighting when they started. I tried placing the initial use_highlight_tiles assignment at the class level, in the render function, and in the function that actually does the rendering. No go in all three placements.

If it helps here’s the link to the rendering module (https://github.com/appleseedhq/blenderseed/blob/master/render.py)

We’re not using the ‘engine’ name, but it’s still subclassing bpy.types.RenderEngine to do all the heavy lifting.

Here’s an example. It looks like it only works when the tile size and placement match the Blender settings though, because it’s tied to the same system used for writing EXR to disk, which requires a specific tile layout.


import bpy
import time

class CustomRenderEngine(bpy.types.RenderEngine):
    bl_idname = "test_renderer"
    bl_label = "Test Renderer"
    bl_use_preview = True

    def render(self, scene):
        # Match Blender's own tile size so the tile grid lines up.
        self.size_x = scene.render.tile_x
        self.size_y = scene.render.tile_y

        self.use_highlight_tiles = True

        pixel_count = self.size_x * self.size_y
        pixels = [[0.9, 0.8, 0.0, 1.0]] * pixel_count

        # begin/end with highlight=True and cancel=True marks the tile
        # as "in progress" without writing any pixels.
        result = self.begin_result(0, 0, self.size_x, self.size_y)
        self.end_result(result, highlight=True, cancel=True)

        # Simulate render time.
        time.sleep(1.0)

        # Write the actual pixels, which clears the highlight.
        result = self.begin_result(0, 0, self.size_x, self.size_y)
        layer = result.layers[0].passes["Combined"]
        layer.rect = pixels
        self.end_result(result)


def register():
    bpy.utils.register_class(CustomRenderEngine)

def unregister():
    bpy.utils.unregister_class(CustomRenderEngine)

if __name__ == "__main__":
    register()

So basically we can’t use it because Blender has no control over where tiles are coming in, right?

So how does cycles handle tile highlighting and what does it do when a render is canceled?

Cycles uses this system, tiles have the same x/y/w/h coordinates as Blender expects. It’s just what you get when you make a grid of tiles with size scene.render.tile_x and scene.render.tile_y, starting from the bottom left corner. The order doesn’t matter.
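A sketch of that grid (illustrative code, assuming the bottom-left origin and partial tiles at the right/top edges as described):

```python
def blender_tile_grid(width, height, tile_x, tile_y):
    """Tile rectangles (x, y, w, h) as Blender lays them out: a grid of
    tile_x * tile_y tiles starting from the bottom-left corner, with
    partial tiles at the right and top edges. Order is irrelevant for
    highlighting; only the coordinates must match."""
    tiles = []
    for y in range(0, height, tile_y):
        for x in range(0, width, tile_x):
            tiles.append((x, y, min(tile_x, width - x), min(tile_y, height - y)))
    return tiles
```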

Anyway, if you want to use the existing system you have, is there a specific reason you need to use a single begin_result call instead of one per tile as you are already doing for writing the result without cancelling?

For border render, as far as I know you should consider it as if you’re rendering an image with a smaller size with a different camera projection matrix. Depending on the border Crop option Blender can copy the result into a bigger image after rendering, or not, but this should not affect the render engine.
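A sketch of that interpretation, assuming the fractional scene.render.border_min_x / border_max_x (etc.) values; the exact rounding Blender applies may differ:

```python
def border_region(width, height, min_x, min_y, max_x, max_y):
    """Pixel rectangle of the border region. Blender stores the border
    as 0..1 fractions of the full resolution; the render engine then
    works in a coordinate system local to this smaller rectangle, so
    per-tile begin_result calls use x/y relative to its (0, 0) corner.
    The truncation here is a plausible choice, not Blender's exact one."""
    x = int(min_x * width)
    y = int(min_y * height)
    w = int(max_x * width) - x
    h = int(max_y * height) - y
    return x, y, w, h
```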

Hi Brecht,
So if we used the scene.render.tile_x and tile_y parameters instead of our own, that might make it work?

The reason for the single begin_result call is mainly one of simplicity. The array is initialized at the start of rendering and completed pixels overwrite the initial values as they come in. If rendering is cancelled it seemed easier to load the buffer in one operation rather than try to create another tiling process.

That’s why I’d really like to get the built-in highlighting system working for us. I feel like I’m trying to force a round peg into a square hole when a better solution exists, I just haven’t quite figured out how to use it yet.

So here’s a theory: if scene.render.tile_x (and tile_y) are set to 64 and appleseed renders in 64x64 tiles as well, then the system should work, regardless of exactly how appleseed progresses through those tiles (Hilbert, Spiral, Linear, etc…)?

BTW I appreciate the help

Edit: one other thing. So when the render starts, Blender splits the image result into a grid based on the tile_x and tile_y size, and for highlighting to work it has to do a begin_result at the same places where Blender thinks those tiles are? Sorry, I’m just trying to understand conceptually how this works.

FYI currently appleseed tells the addon the location of the tile it’s rendering and then that info is used to put the begin_result in the right place.

Yes, I think that would likely work.

Hi Brecht. I got it to work. Unfortunately the issue appears to be on appleseed’s side. It dices the image plane into tiles from the upper left instead of the lower left like Blender. When the tile size is set to an even divisor of the render size it works, but the second any partial tiles enter the picture it breaks.
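To illustrate the mismatch, here are the two dicing conventions expressed in the same bottom-left coordinate space (a sketch, not appleseed’s actual code):

```python
def grid_bottom_left(w, h, tw, th):
    """Blender's layout: dice from the bottom-left corner, so partial
    tiles land along the right and top edges."""
    return [(x, y, min(tw, w - x), min(th, h - y))
            for y in range(0, h, th) for x in range(0, w, tw)]

def grid_top_left(w, h, tw, th):
    """appleseed's layout expressed in bottom-left coordinates: dice
    from the top-left, then flip each tile's y origin. Partial tiles
    land along the right and bottom edges instead."""
    tiles = []
    for y_top in range(0, h, th):
        hh = min(th, h - y_top)
        for x in range(0, w, tw):
            tiles.append((x, h - y_top - hh, min(tw, w - x), hh))
    return tiles
```

With a 128x128 image at a 64-pixel tile size the two grids produce the same set of rectangles; at 100x70 they don’t, because the short partial tiles end up at opposite edges of the frame.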

Thank you so much for your help.