Real-time Compositor: Feedback and discussion

We are trying to make the experimental F12 GPU compositor faster for interactive editing in the node editor. One of our patches is expected to achieve that, at the cost of some overall performance: editing the node tree will feel faster and lock the UI less, but the total time to compute the final image will be longer.

If you have some complex node trees that take quite a while to compute, can you test with and without the patch and let us know the compute time and feel of interactive editing?


I do like that the UI feels snappier and gives more real-time feedback on its progress.

Using my node tree, this is my time to final image:

Blender 4.0 : ~35 secs
Blender 4.1.1 : ~35-40 secs
Blender 4.2 patch GPU: ~6 secs

GPU: Nvidia GeForce RTX 3070
Processor: Intel i9-10900K @ 3.70GHz

I also noticed that the compositor recalculates if I just go in and out of a group node without making any changes. Can you prevent that from happening?


I have a very complex node tree that should work well for testing:

I can't currently download the patch build, but I'd be happy to share the .blend file for extensive testing:

It's relatively small, 13 MB. CC0: download, tweak, modify, etc., at your leisure.

This is an excellent test file because it relies heavily on compositing, and it's a "real-world" file, not a theoretical example.

Thanks for testing. But I am also interested in comparing the build I shared against stock 4.2, using the GPU compositor for both.

I can look into it. But going inside or outside a node group can cause a change, since the active viewer node might change as a result. So it might not be avoidable in all cases.

Thanks! This will be useful.


Tested the build with @josephhansen's file, and then played freely with node setups. Everything was nice and fast. No crashes, no errors. Nice and snappy so far.

Maybe a simple check for the presence of viewer nodes when entering/exiting a group?

By the way, I noticed that a full recalculation is also done when adding nodes that don't feed into the final calculation. I mean, add a Mix node, unlinked from any other node; change its values and the full network is re-evaluated.
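That unlinked-node case could in principle be handled with a reachability check: only re-evaluate when the edited node feeds, directly or transitively, into the viewer. A minimal sketch of the idea in plain Python, using a hypothetical adjacency-list model of the node tree rather than Blender's real dependency graph:

```python
from collections import deque

def affects_output(links, changed, output):
    """Return True if `changed` reaches `output` through `links`.

    `links` maps each node name to the nodes it feeds into.
    """
    seen = {changed}
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        if node == output:
            return True
        for downstream in links.get(node, ()):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return False

# A tree where a Mix node sits next to the main chain, unlinked:
links = {"Image": ["Blur"], "Blur": ["Viewer"], "Mix": []}
print(affects_output(links, "Blur", "Viewer"))  # True: recompute needed
print(affects_output(links, "Mix", "Viewer"))   # False: edit could be skipped
```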


Blender 4.2 GPU: 5-8 secs
Blender 4.2 patch GPU: 6-9 secs

It would be nice if the compositor didn't recalculate under any of the following conditions:

  1. No viewer node exists
  2. A viewer node exists, but isn't plugged into anything
  3. A viewer node exists and is plugged into something, but it's muted
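Those three conditions collapse into a single guard. A hedged sketch in plain Python, with a toy Node class standing in for Blender's actual node types (the names here are purely illustrative):

```python
class Node:
    """Minimal stand-in for a compositor node (illustrative only)."""
    def __init__(self, node_type, muted=False, linked=False):
        self.node_type = node_type  # e.g. "VIEWER", "MIX"
        self.muted = muted          # node is muted
        self.linked = linked        # something is plugged into its input

def viewer_is_active(nodes):
    """True only if some viewer node is linked and not muted,
    i.e. none of the three skip conditions above apply."""
    return any(
        n.node_type == "VIEWER" and n.linked and not n.muted
        for n in nodes
    )

print(viewer_is_active([Node("MIX")]))                              # False: no viewer
print(viewer_is_active([Node("VIEWER")]))                           # False: unlinked
print(viewer_is_active([Node("VIEWER", muted=True, linked=True)]))  # False: muted
print(viewer_is_active([Node("VIEWER", linked=True)]))              # True
```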

@OmarEmaraDev

How does one scale an image and still keep the final image within the size of the render resolution?

Here in this image, if I scale using the Transform node, it grows beyond the size of the render; if I add a "Scale" node set to Render Size, then the image (the Death Star should be cropped, in theory) does not change at all.

It seems like the Viewer node is not able to preview this type of node sequencing (the image below renders as intended but is not shown as intended).

Here it is rendered; see that it is cropped properly in the render but not in the viewport.


The Crop mode in the Scale node probably doesn't mean what you think it means. The Scale node never crops anything; it scales the image by a factor such that the bigger dimension of the image would get cropped if it were constrained by the render region. The Viewer node does not constrain itself to any region, so no cropping occurs and you just get the scaling. The Composite node constrains itself to the render region, so you get the desired cropping.
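A worked example of that behaviour, assuming the usual fit/crop convention (fit scales by the smaller axis ratio, crop by the larger one); the function and mode names are illustrative, not Blender's API:

```python
def scale_factor(image_size, render_size, mode):
    """Uniform scale factor for placing an image in a render region.

    "FIT" leaves letterboxing on one axis; "CROP" overflows the other
    axis, which only gets cut off by a node that constrains itself to
    the render region (the Composite node, not the Viewer).
    """
    rx = render_size[0] / image_size[0]
    ry = render_size[1] / image_size[1]
    return min(rx, ry) if mode == "FIT" else max(rx, ry)

image, render = (3000, 1000), (1920, 1080)
f = scale_factor(image, render, "FIT")   # 0.64 -> 1920 x 640, letterboxed
c = scale_factor(image, render, "CROP")  # 1.08 -> 3240 x 1080, sides overflow
print(f, c)
```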


Sure, but wouldn't it make sense for the Viewer node to show what the final render shows? See the one where I disabled the "Scale" node (the bottom image). It makes it hard to judge what the user sees versus the final image; there needs to be a way to match the final image, at least some kind of framing guides that let the user judge where and how the image is bounded.

As far as the cropping goes, how would the user crop a scaled image that extends beyond the render size in the current system?

  • A counter argument to making the viewer restricted to the compositing region is as follows. If you have an image that you would like to view or adjust which is not the same size as the render size, would it make sense to crop it? How would we allow users to view entire images if we always clip to the render region?
  • The general assumption is that the user will eventually mix/overlay the image on the render result or other resources with the same size as the render, which will naturally crop it to the render region. That's why users rarely notice the difference between the viewer and the composite nodes.
  • If the aforementioned point is undesirable for some reason, then you will have to use the Crop node.
  • We recognize the confusion that this might cause, and have previously discussed it in this task. So feel free to add your feedback there for completeness.

Thanks for the link, I was not aware of the prior developer discussion on it.

My current issue with the stacked transforms and the viewer node behavior is that it gets complicated once there are multiple scale transforms coming from different inputs. I agree with all the points you raised, but that is not how other compositing apps work; there is always a frame to judge against in those apps.

Also, the Crop node does not seem to work; it disables the final output when enabled in this setup.

Relative size (used the handles to set up the frame)


This might be because your ends are flipped, left should be lower than right and bottom should be lower than upper.
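If flipped bounds are indeed the cause, they can be detected and normalized by swapping. A tiny illustrative helper showing the normalization idea, not anything Blender does today:

```python
def normalize_crop(left, right, lower, upper):
    """Swap flipped crop bounds so left <= right and lower <= upper."""
    if left > right:
        left, right = right, left
    if lower > upper:
        lower, upper = upper, lower
    return left, right, lower, upper

print(normalize_crop(300, 100, 250, 50))  # (100, 300, 50, 250)
print(normalize_crop(10, 20, 5, 15))      # already valid, unchanged
```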


I'm curious about a question that I'm not sure should be discussed in this thread.
I'm wondering if a GPU acceleration switch will be added to the noise reduction node?
Or will there be an option to use OptiX or OpenImageDenoise for noise reduction?
(I've been working on multi-channel noise reduction recently and thinking about its cost-effectiveness. Spending a lot of CPU time denoising each channel separately does not seem as practical as just turning on the render denoising switch, and I can't see much of a difference between the two results.)

Just writing this to voice my need for image tiling in the compositor. I understand why it no longer works, but this breaks multiple post-processing effects of mine, including some I've sold to customers. This is a real problem for me at the moment.

OpenImageDenoise will eventually get GPU acceleration support. See #115242 - Compositor: Enable GPU denoising for OpenImageDenoise - blender - Blender Projects. But I doubt per-channel denoising will work better with it.

I brought it up in the module meeting yesterday, so we are reconsidering our decision to address your concerns. @thecali Would you prefer the old way of doing tiling, a Tile node with a number of tiles to repeat in rows and columns, or both?


Omar,
Blender is fortunate to have you on board; you are doing a great job.

Do you think it would be reasonable to add new nodes, like for example a "Vibrance" node?

As I understand it, Vibrance raises the saturation of the less saturated parts of the image.
We already have a "Separate Channels" node from which we can extract the saturation channel to use as a multiplier for the original saturation of the image.

The same goes for "Highlights" and "Shadows" nodes and the Lightness / Value channel.

Similar tools can be found in Adobe Lightroom (even my smartphone's default photo editor has them), so adding these nodes seems like low-hanging fruit.

And although we can all DIY these nodes, it would be nice to have them built in to streamline the post-correction workflow.
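For illustration, a vibrance-style adjustment along those lines can be sketched in a few lines of Python; the falloff formula here is just one plausible choice, not an established spec:

```python
import colorsys

def vibrance(rgb, amount):
    """Boost saturation more strongly for less-saturated pixels.

    The (1 - s) factor makes the boost fall off as saturation rises,
    which is the behaviour usually labelled "vibrance".
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(1.0, s * (1.0 + amount * (1.0 - s)))
    return colorsys.hsv_to_rgb(h, s, v)

dull = (0.5, 0.45, 0.4)   # low saturation: gets a strong boost
vivid = (1.0, 0.2, 0.0)   # fully saturated: unchanged
print(vibrance(dull, 0.8))
print(vibrance(vivid, 0.8))
```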


I think your counterarguments are valid. On the other hand, when the viewer is connected to the same node that feeds into the Composite node, it seems pretty reasonable, from a user perspective, to expect that they would output the same result. Maybe the Composite node should have an output, so you could connect the viewer to it?

Edit: I hadn't seen the task link, thanks! I also commented there.

Thanks for the reply!

I'd personally prefer the old wrapping behaviour, because it is very useful for overlaying an image with a repeating pattern (for example, a small 8x8 dither pattern). A Tile node with a defined count would be very handy too, but in my opinion, filling the canvas with a pattern is more important.
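Both options reduce to the same primitive: repeat a small pattern, then either crop to the canvas (the old wrap-style fill) or keep an exact row/column count (a Tile node). A plain-Python sketch of the wrap-style fill, using a toy 2x2 ordered-dither pattern:

```python
def fill_canvas(pattern, width, height):
    """Tile `pattern` (a 2D list) to cover a width x height canvas,
    wrapping via modular indexing, as the old behaviour effectively did."""
    ph, pw = len(pattern), len(pattern[0])
    return [
        [pattern[y % ph][x % pw] for x in range(width)]
        for y in range(height)
    ]

bayer2 = [[0, 2],
          [3, 1]]  # tiny ordered-dither threshold pattern
for row in fill_canvas(bayer2, 5, 3):
    print(row)
# [0, 2, 0, 2, 0]
# [3, 1, 3, 1, 3]
# [0, 2, 0, 2, 0]
```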


Do you think it would make sense for the compositor to have an overlay that shows the render size, similar to how Camera View works in the 3D viewport (including the passepartout opacity)?

I think it would be pretty handy to have an overlay like this: easy to toggle on/off as needed, independent of what operations you're doing on images/footage, and it wouldn't change any viewer node either.


Well, it is from 10 months ago. It's hard to know whether a discussion from last year is still current, under active consideration, or was abandoned.
