Compositor Improvement Plan: relative space

Hi there,

Currently, when users change the render size percentage of a scene, the compositor tree needs to be adjusted by hand because many compositor nodes aren’t relative. This has a large impact on both the user’s workflow and system performance. This is part of https://developer.blender.org/T74491.

A possible solution is to introduce a pixel factor on every buffer in the compositor. The term “pixel factor” is not entirely accurate, but I couldn’t come up with a better one (suggestions are welcome).
The pixel factor is the scale factor between the input buffer actually used and the input buffer that would have been used if the render resolution percentage had been set to 100%.

For render layers, the pixel factor is the same as the render resolution percentage. For movie clips and images, a pixel factor of 1 is used.
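To make that concrete, here is a minimal sketch of the per-input factors described above. All names are hypothetical illustrations, not actual Blender code:

```cpp
/* Hypothetical sketch: per-input pixel factors as proposed above. */
float render_layer_pixel_factor(float render_resolution_percentage)
{
  /* Render layers follow the scene's render percentage:
   * rendering at 50% gives a pixel factor of 0.5. */
  return render_resolution_percentage / 100.0f;
}

float image_pixel_factor()
{
  /* Images and movie clips are read at their native resolution. */
  return 1.0f;
}
```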

Nodes that use parameters in pixels could use this pixel factor to adjust their effect. Nodes that use multiple input buffers would select one pixel factor for the output buffer. How it is selected depends on the node: for example, when mixing a render layer over a movie clip, the input pixel factor closest to the one the compositor output node would use is selected for the output buffer.
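A minimal sketch of that selection rule, again with hypothetical names rather than existing compositor code:

```cpp
#include <cmath>
#include <vector>

/* Hypothetical sketch: among the input buffers' pixel factors, pick the
 * one closest to the factor the compositor output node would use. */
float select_output_pixel_factor(const std::vector<float> &input_factors,
                                 float composite_factor)
{
  float best = input_factors.front();
  for (const float factor : input_factors) {
    if (std::fabs(factor - composite_factor) < std::fabs(best - composite_factor)) {
      best = factor;
    }
  }
  return best;
}
```

For example, mixing a render layer with factor 0.5 over a movie clip with factor 1.0 while compositing at 50% would pick 0.5 for the output buffer.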

The compositor output node would scale the input buffers to match the scene render percentage.

There are nodes where the pixel factor won’t work:

  • Glare Node: The iteration factor could be scaled, but this could lead to different results.
  • Filter Node: Uses a convolution filter; running it on scaled-down buffers could lead to visual artifacts.
  • Despeckle Node: Uses arithmetic with neighbouring pixels; might not show visual artifacts.

There is a new compositor implementation at https://github.com/m-castilla/blender/tree/compositor-up, but it seems that this branch doesn’t tackle this part of the ticket.

Questions:

  • Is having a pixel factor the way to solve the problem?
  • What should be done with the nodes that cannot work with a pixel factor?
  • Should the Image/Movieclip pixel factor be dependent on the compositor output resolution? (Quality vs performance)

Fusion uses relative/normalized coordinates, and it’s very useful. Most tools adhere to this, and it makes creating templates and macros/groups much, much easier. A coordinate of (0.5, 0.5) is always at the center of the screen, etc.

The only tools that don’t work this way are things that are inherently pixel-offset based, like computing optical flow.

That’s a +1 from me.


In video games the term “resolution scale” is common.


I assume the pixel factor you’re referring to is a purely internal value. There is no reason for it to be exposed to users: as far as artists are concerned, things should “just work”.

I do believe that operating in pixel space is more natural for artists. For example, a blur of 16 pixels is clearer to them than a blur of 0.0083 (which is 16 pixels normalized to Full HD resolution). Surely, sometimes it might be clearer to expose a normalized transform (similar to GIMP’s scale, where it’s possible to switch from pixel values to percentages), but to me introducing such unit toggles is a separate topic.

The way I see it, the mental model of nodes stays pretty much the same. The difference would be that the pixel values are in 100% render resolution space. So if one sets the render resolution to 200%, the effective blur size becomes 32 pixels, and at 50% render resolution it becomes 8 pixels.
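In other words (a toy sketch, not code from any patch): node parameters stay authored in 100% space and are only scaled at evaluation time.

```cpp
/* Hypothetical sketch: a blur size authored at 100% render resolution is
 * scaled by the pixel factor when the node is evaluated. */
float effective_size_px(float authored_size_px, float pixel_factor)
{
  /* authored 16 px: at 200% -> 32 px, at 50% -> 8 px. */
  return authored_size_px * pixel_factor;
}
```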

So in these terms, the proposed pixel factor seems to be aligned with the initial idea from when T74491 was written down. I’m not sure why it should be per-buffer, though. To me it seems it should only be taken into account when the compositor tree is converted from bNodes to OperationNodes.

The Glare node has a different issue: it is single-threaded. There should be another algorithm which is multi-threaded. For the time being, having a half-decent approximate scaled result will be sufficient. The other nodes you mention are, I think, expected to have possibly different pixel-to-pixel results anyway (and you can’t scale down without artifacts in the current compositor anyway ;).

Are there any other “problematic” nodes?

Images/clips are to be perceived as scaled according to the pixel factor, so that they work nicely for VFX shots.


I definitely agree with Sergey on this point.

I love this whole idea, but does this add calculation time on top of the current speed of the compositor? The main reason any compositor artist would scale a project down to 50% is to dial in effects faster and see the changes sooner. I’m guessing that, continuing Sergey’s example, if it scales a blur of 16 down to 8, it would definitely speed up the processing. Is that correct?

Not necessarily. I don’t expect the speed to change. However, the user can now change a single slider to have the compositor calculate in a different relative space, instead of manually changing the resolution of each node.

Yes, that should be the case.

I agree with what Sergey is saying. I think it’s possible to make everything work in relative space without changing the UI as it is now; it’s just an implementation thing. The UI may still be changed, but that’s a matter of thinking about what kinds of inputs are easiest for users to understand and change.

You can still give the user the ability to input relative or absolute values, and internally they would be converted depending on the current “inputs scale”, as I’ve called the option I added in the compositor-up branch, because it does exactly that. It’s just a matter of explaining to the user that the values they enter always refer to the “Inputs Scale = 1.0” case.
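For example, the conversion could look like the following hypothetical helpers (not code from the compositor-up branch):

```cpp
/* Hypothetical sketch: user-facing values always refer to Inputs Scale = 1.0;
 * internally they are multiplied by the current inputs scale. */
float to_internal_px(float user_value_px, float inputs_scale)
{
  return user_value_px * inputs_scale;
}

float to_user_px(float internal_value_px, float inputs_scale)
{
  return internal_value_px / inputs_scale;
}
```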

I added “Inputs Scale” because resolutions are mostly calculated from inputs to output. If you change the resolution percentage of the output, it may affect inputs like “Render Layers” or “Video Sequencer”, because they are themselves outputs in Blender, but not inputs like “Image” or “Video”, which can be any resolution.

The “Inputs Scale” option was easy to add, so I added it as an experimental thing; it certainly needs everything in relative space to work correctly. I’ll try to implement this when I finish adapting all the nodes; so far I’ve mainly focused on performance and on making all nodes work as before. But, as has already been said, effects that are pixel-based and only a few pixels in size can’t scale down proportionally: for example, a 2-pixel effect scaled by 0.8 stays at 2 pixels (due to rounding), so you’ll see relatively more of that effect. I guess we, and the end user, have to accept that in some cases the result of scaling everything down may be very different from the result at scale 1.0; it’s just meant to be an approximation for fast previewing.
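To make the rounding issue concrete, here is a toy calculation (hypothetical, not code from the branch):

```cpp
#include <algorithm>
#include <cmath>

/* Hypothetical sketch: small pixel-based parameters survive downscaling
 * almost unchanged, so the effect looks relatively larger. */
int scaled_size_px(int size_px, float inputs_scale)
{
  /* round(2 * 0.8) = round(1.6) = 2: the effect stays 2 pixels wide even
   * though the buffer shrank, so proportionally more of it is visible. */
  return std::max(1, int(std::lround(size_px * inputs_scale)));
}
```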

If the user just wants a smaller view and a 100% accurate result, the viewer zoom should be changed instead, which simply scales down at the end of the tree. The “inputs scale” option is meant for better performance at the cost of output accuracy/quality. That’s why it’s in the performance section.


A possible solution for pixel-based effects, so that they produce (approximately) the same result as at full resolution (inputs scale = 1.0), is for the algorithm to automatically check whether a rounding problem is going to occur, and to increase or reduce the resolution further until it doesn’t, then scale the result back to the original resolution once the operation is done. This would work especially well when only one pixel-based parameter is involved in the algorithm; with two, they would have different rounding problems and the operation might need to be done at full resolution. It’s just an idea I’ve come up with; I might not be taking something into account. I think it could work, at least for the implementation I’ve done.
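A rough sketch of that idea, under the assumption of a single pixel-based parameter (hypothetical code, only searching upwards in resolution):

```cpp
#include <cmath>

/* Hypothetical sketch: nudge the working scale until the pixel-based
 * parameter lands near an integer, so the scaled operation approximates
 * the full-resolution result; the buffer is scaled back afterwards. */
float find_rounding_safe_scale(float parameter_px, float requested_scale,
                               float step = 0.01f, float tolerance = 0.05f)
{
  float scale = requested_scale;
  while (scale < 1.0f) {
    const float scaled = parameter_px * scale;
    if (std::fabs(scaled - std::lround(scaled)) < tolerance) {
      return scale;  /* parameter rounds cleanly at this scale */
    }
    scale += step;  /* increase the resolution until rounding is acceptable */
  }
  return 1.0f;  /* fall back to full resolution */
}
```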

https://developer.blender.org/T80562
