Currently, when users change the render size percentage of a scene, the compositor tree needs to be adjusted manually, as many compositor nodes do not use relative values. This hurts both the user's workflow and system performance. This is part of https://developer.blender.org/T74491.
A possible solution is to introduce a pixel factor on every buffer in the compositor. The term "pixel factor" is not entirely accurate, but I could not come up with a better one (suggestions are welcome).

The pixel factor is the scale factor between the input buffer that is actually used and the one that would have been used if the render resolution percentage were set to 100%.
For render layers the pixel factor is the same as the render resolution percentage. For movie clips and images a pixel factor of 1 is used.
Nodes that use parameters in pixels could use this pixel factor to adjust their effect. Nodes with multiple input buffers would select a pixel factor for the output buffer. The selection depends on the node; for example, when mixing a render layer over a movie clip, the input pixel factor closest to the one the compositor output node would use is selected for the output buffer.
The compositor output node would scale the input buffers to match the scene render percentage.
There are nodes where the pixel factor won’t work:
- Glare Node: the iteration count could be scaled, but this could lead to different results.
- Filter Node: uses a convolution filter; working on scaled-down buffers could lead to visual artifacts.
- Despeckle Node: uses arithmetic on neighbouring pixels; might not produce visual artifacts.
There is a new compositor implementation at https://github.com/m-castilla/blender/tree/compositor-up, but it seems that this branch doesn't tackle this part of the ticket.
Open questions:

- Is having a pixel factor the way to solve the problem?
- What should be done with the nodes that cannot work with a pixel factor?
- Should the Image/Movieclip pixel factor be dependent on the compositor output resolution? (Quality vs performance)