Real-time Compositor: Feedback and discussion

Better integration with render engines is planned and would indeed be good to have, though the current priority is feature coverage.

This is a consequence of infinite-canvas compositing, which the real-time compositor supports.
Essentially, when you scale an image up by 10x, it occupies a larger area of the infinite compositing space, but it still contains the same number of pixels.
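To make the idea concrete, here is a minimal sketch of the "domain" concept described above. The class and function names are illustrative assumptions, not Blender's actual internal API: a result carries a pixel resolution plus a transform placing it on the infinite canvas, and a Scale node only changes the transform.

```python
from dataclasses import dataclass

@dataclass
class Domain:
    """Hypothetical model: pixel resolution plus canvas placement."""
    width: int          # pixel count, unchanged by transforms
    height: int
    scale: float = 1.0  # placement on the infinite canvas

def scale_node(d: Domain, factor: float) -> Domain:
    # Scaling only adjusts the canvas transform; no resampling happens,
    # so the pixel data keeps its original resolution.
    return Domain(d.width, d.height, d.scale * factor)

img = Domain(256, 256)
big = scale_node(img, 10.0)
assert (big.width, big.height) == (256, 256)  # same number of pixels
assert big.scale == 10.0                      # occupies 10x the space
```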

Now, for the Alpha Over node, the “main input” of the node is the background input, which means the output will have the same number of pixels and will occupy the same space as the background input. But since the cube occupies a small space relative to the background, the number of pixels it effectively covers will be small, hence the pixelation you see.
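As a rough back-of-the-envelope illustration of why this pixelates (the function and numbers here are assumptions for illustration only): the foreground is resampled into the background's pixel grid, so if the cube covers only a small fraction of that grid, few output pixels represent it, regardless of the cube's own render resolution.

```python
def effective_pixels(bg_width: int, coverage_fraction: float) -> int:
    # Pixels of the background's grid that the foreground actually
    # covers along one axis, after being realized in that domain.
    return int(bg_width * coverage_fraction)

# A 1920-wide background with the cube covering 5% of its width:
print(effective_pixels(1920, 0.05))  # -> 96 pixels across, hence pixelation
```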

The existing compositor simply crops the images to fit the render size, as you can see, which shouldn’t happen in infinite-canvas compositing.

This is definitely a workflow issue that I plan to tackle. The most obvious solution is to let the user choose which input is the “main input”. If you are adding a background to your cube, then the cube should be the main input, but if you are adding a cube to your background, then the background should be the main input. So I am aware of this.
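A sketch of what that option could look like, purely as an assumption about the proposed design (none of these names exist in Blender): the output domain is simply taken from whichever input the user marks as main.

```python
from dataclasses import dataclass

@dataclass
class Domain:
    """Hypothetical model: the pixel resolution of a compositor result."""
    width: int
    height: int

def alpha_over_domain(background: Domain, foreground: Domain,
                      main: str = "background") -> Domain:
    # The chosen main input drives the output domain; the other input
    # would be resampled into it.
    return background if main == "background" else foreground

bg = Domain(1920, 1080)
cube = Domain(2560, 1440)
assert alpha_over_domain(bg, cube) == bg                      # default: background wins
assert alpha_over_domain(bg, cube, main="foreground") == cube # cube's resolution preserved
```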

There are two different Metal backends in Blender. First, there is the Cycles Metal backend, which is a compute backend exclusive to Cycles, so it is not related to our work. Second, there is the GPU module Metal backend, which is still under development; we will likely need to wait until it is almost fully implemented. You can track the progress here:

https://developer.blender.org/T96261

I will make sure to let you know if there are any functional builds for Mac as soon as they are available.