Real-time Compositor: Feedback and discussion

This is done to simplify the mental model of the compositor and make it more intuitive. Consider a scale-down node followed by a scale-up node: one would expect pixelation in that case, but concatenation of transform nodes would turn it into a no-op.

You probably do get some degradation due to double resampling, but it is rather unlikely that the user would concatenate multiple transform nodes, and that’s a price we pay for the aforementioned behavior.

Also, translation is not applied, only rotation and scaling, so fractional resampling due to translation is not an issue.
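To make the no-op claim concrete, here is a minimal sketch (plain Python, not Blender code) of why concatenating scale transforms into a single matrix turns a scale-down followed by a scale-up into the identity, so the image is resampled at most once:

```python
# Sketch only: concatenating 2D scale transforms into one matrix.
# A scale by 0.5 followed by a scale by 2.0 multiplies out to the
# identity, so no resampling (and no pixelation) is needed at all.

def scale(s):
    # 2x2 uniform scale matrix as nested lists
    return [[s, 0.0], [0.0, s]]

def concat(a, b):
    # matrix product a @ b (b is applied first)
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

combined = concat(scale(2.0), scale(0.5))
print(combined)  # [[1.0, 0.0], [0.0, 1.0]] -- the identity, a no-op
```

Realizing each transform immediately instead would resample twice, and the scale-down step would irreversibly discard detail.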

2 Likes

This reads as if it’s being done for people who don’t understand compositing, rather than people who do. And perhaps those who don’t, should learn.

I think an option that might satisfy both camps might be to simply add a checkbox to the transform node, labelled “rasterize” or something similar. Let the nodes downstream eval the scale accordingly.

2 Likes

What practical application do you have that demonstrates the need for delayed transformations?

Quick example - when nesting a comp within another, using a layer a matte source. It’s common that one wants to maintain the original matte’s resolution, not have a forced downsample and then re-upsample.

5 Likes

We regularly use clips that are 5K or even 6K and then crop in to deliver at 2K or 4K so the framing can be changed. If there are transformed elements that have already been downsampled, it can cause problems when you change the export resolution.

The whole problem with this is that it treats the display device the composite is being made on as if it were the final output device, which usually isn’t true.

If this were going the other way, scaling up by 2.0 and then transforming by 0.5, you would expect that to be a no-op. The behaviour for scaling down and then back up should be the same, for consistency.

In any case, both Nuke (concatenation) and Photoshop (Smart Objects) delay transforms for good reason.
https://learn.foundry.com/nuke/content/comp_environment/transforming_elements/node_concatenation.html

I’m trying to find the original design docs for the compositor because I’m sure there was a discussion about exactly this.

7 Likes

There you have it: two specific practical applications.

Thank you for sharing this information. To contribute to the Real-time Compositor project, I’ll focus on providing feedback on UI/UX concerns related to the viewport compositor.

@thorn-neverwake @John_Cox I am sorry, but I can’t seem to follow either example. Can you clarify them?

Is Fog Glow also removed for CPU in 4.2? It doesn’t work.

Simple Star also has those weird cut-offs that aren’t present in 4.1.


Fog Glow for GPU became the new Bloom option, so just moved, not removed.

Can you open a bug report for the Simple Star thing?

I can’t make sense of “using a layer a matte source”, I guess there is a word missing in that sentence, but even then it’s not clear to me how this relates to mattes, or why downsampling would be involved in using the matte.

But for “nesting a comp”, I can imagine generally that if you have e.g. some transform node inside a node group to position an element, and one outside, it makes sense to chain the transforms when possible.

It’s not clear to me why changing the export resolution would be a problem. Also, in Nuke many nodes will rasterize pixels. What I’m guessing you are referring to is preserving pixels outside the display window, related to this design task?

If so, that’s related to chaining transforms, but also something different where those pixels are preserved for many more nodes. This is planned to be added at some point.

1 Like

Correct me if I’m wrong, but I don’t think e.g. Nuke would preserve resolution there either? The node that mixes the two images would apply the transforms, after which upscaling would not recover the full resolution.

I’ve tried to avoid referring to other software behavior, as it’s somewhat taboo here and I try to respect the reasons for it. I don’t use Nuke so cannot compare. But it’s the same concept as smart layers in Photoshop, or “continuously rasterize” in After Effects.

I think it’s not the same though. If all you are doing is transforms and maybe blend modes, those can be chained together and resolution can be preserved. However this is not practical for arbitrary image operations.

There’s a certain balance here between where resolution gets preserved and where not. This involves trade-offs regarding memory usage, GPU efficiency, predictability, etc. So it’s useful to understand the use cases specifically.

I think the general example of scaling down, applying arbitrary operations, and then scaling up cannot be expected to preserve resolution.

In the case of smart layers/objects, the layer is preserved at its original resolution during transformation and only displayed as scaled/tilted/rotated/skewed/etc., as long as there is no need to change pixel values. If you want to change pixel values, the transformation must be applied first, or you can revert the transform, modify the original image, and then transform it again.

A use case would be scaling up/down within the 0–1 range of the original size without losing original quality at scale 1.
Without the ‘smart layer’ mechanism, if you scale the layer down from 1 to 0.5 and then to 0.8, the image has worse quality than if you scale directly from 1 to 0.8. The same goes for most of the transformations.
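A toy illustration of that quality argument (a hypothetical 1D nearest-neighbour resampler, nothing from Blender or Photoshop): resampling to 50% and then back up to the same 80% target discards detail that a direct resample to 80% keeps:

```python
# Sketch only: nearest-neighbour resampling of a 1D "image" to show
# why 1 -> 0.5 -> 0.8 loses more detail than a direct 1 -> 0.8.

def resample(pixels, factor):
    # pick the nearest source pixel for each destination pixel
    n = max(1, round(len(pixels) * factor))
    return [pixels[min(len(pixels) - 1, int(i / factor))] for i in range(n)]

src = list(range(10))                        # ten distinct pixel values
direct = resample(src, 0.8)                  # 1 -> 0.8 in one step
staged = resample(resample(src, 0.5), 1.6)   # 1 -> 0.5, then up to 0.8

# Both results are 8 pixels, but the staged one has fewer distinct
# values: the 0.5 step already threw half of the source pixels away.
print(len(set(direct)), len(set(staged)))    # prints: 8 5
```

The concatenated (smart-layer-like) behavior corresponds to `direct`; immediate realization of each transform corresponds to `staged`.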

All of this is especially useful if there is a need to change, e.g., the scale of the image or parts of it, but you are not sure about the final size.

2 Likes

Right, so the equivalent in compositing nodes would be to reorder the transform node so it is placed before the node that mixes colors, applies a matte, blurs, etc.

Since compositing nodes are procedural already, tweaking a scale value in a transform node does not permanently alter the image anyway, as it would when not using a smart layer in an image editing application.

3 Likes

Consolidating consecutive transformations/realizations was something we supported and tested for the GPU compositor at some point. For instance, see #111216 - Realtime Compositor: Realize rotation and scale for filter nodes - blender - Blender Projects. That also supported concatenation across pixel-wise operations like Mix nodes. There were some implications, however, that made us just do the transformation immediately instead. But it was something that we recognized as useful then and still do now.

In any case, I will bring your feedback up in the next module meeting and discuss it further with other developers then.

8 Likes

Hey @OmarEmaraDev – there are a lot of instances with complex animation that are easier done with multiple transforms in a row, sort of like a 2D rig where one node is in charge of moving a layer, another is in charge of rotating it, and another is in charge of scaling it. This allows you to use different pivot/anchor/center points for each aspect of the overall transformation. You want the result of these transforms to concatenate so there is only one filter hit/sampling step.

The other two examples are also common and valid; the first is more of an After Effects workflow and the second is more of a Nuke thing, but in both, the compositing engine avoids sampling multiple times at all costs by concatenating all possible transforms automatically. There are rare times when you want the filter hits, and there are toggles for that in most compositing software.

You are suggesting that Blender would either not have a concatenation option, or that it would be off by default. I would recommend that you choose the opposite course and have it be something a user can opt to turn off if that’s what they really want.
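The “2D rig” idea above can be sketched like this (plain Python with homogeneous 3x3 matrices; the helper names are mine, not Blender API): each transform carries its own pivot, yet they all multiply into a single matrix, so the layer is sampled only once at the end.

```python
# Sketch only: move, rotate about one pivot, scale about another pivot,
# all concatenated into one 3x3 matrix -> a single resampling step.
import math

def mat_mul(a, b):
    # 3x3 matrix product a @ b (b is applied to the point first)
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate_about(angle, px, py):
    c, s = math.cos(angle), math.sin(angle)
    r = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    # conjugate by the pivot: move pivot to origin, rotate, move back
    return mat_mul(translate(px, py), mat_mul(r, translate(-px, -py)))

def scale_about(sx, sy, px, py):
    s = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]
    return mat_mul(translate(px, py), mat_mul(s, translate(-px, -py)))

# Rig: translate, then rotate 90 degrees about the origin, then scale
# by 2 about the pivot (10, 0) -- one matrix, one filter hit.
rig = mat_mul(scale_about(2, 2, 10, 0),
              mat_mul(rotate_about(math.pi / 2, 0, 0), translate(5, 0)))
```

Applying `rig` to a point gives the same result as applying the three transforms in sequence, without the three separate sampling steps an immediate-realization design would need.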

3 Likes

Since Translation is not immediately realized, changing anchor/pivot is already possible without multiple sampling. But yes, if you want multiple transforms with different pivots, then you will hit multiple sampling and my comment above applies.

The conclusion is that concatenation is a nice-to-have and we will probably implement it by default. The question of whether concatenation should be done across pixel-wise operations is still something we need to discuss.