Improved Render Compositor: Feedback and discussion

I think the post would be better if it explained what EXACTLY changed. There are so many compositors that I’m sure many people like me are confused.

So, if I understand correctly, what changed is the “backdrop” compositor? The viewport is the same. But when you render with F12, is that a different compositor? And what if I render with the GPU? I’m a bit lost.

Also, among the experimental compositors there was a GPU compositor, which in my tests looked much better than the other two. I understand we now have the full-frame compositor. So what is the plan for the GPU compositor?


The release notes do explain what exactly changed; see the link in the post for more details.

The entire CPU compositor changed, that includes the backdrop compositor and the F12 render.

The GPU compositor is still experimental, but we hope to move it out of experimental in v4.2 as well.
Once this is done, we are going to have an “Execution Mode” option in the Performance panel that will let you switch between CPU and GPU execution, for both the backdrop and F12 renders.



Is there also a plan to make the GPU and CPU compositors look the same? Or will the result change if I switch between CPU and GPU mid-project (not that I plan to)?

Also, you said the CPU/GPU choice will affect both the backdrop and F12. Is that the same for the viewport, or is it always the same?

Yes, the goal is that CPU and GPU will be identical, which is one of the things holding us back from moving the GPU compositor out of experimental.

This change has nothing to do with the Viewport Compositor and will not affect it in any way. But while working to unify CPU and GPU as mentioned in the previous point, the viewport compositor will also change to match them.


I personally prefer the new scaling behavior. The current method sometimes creates confusing situations in the compositor where a scaled image is not scaled to the intended size, which affects the frame size of the final composite. Do the new transform changes also apply to cropping-related image frame size changes?

What are the projected speed gains with the new compositor overall?

Are we talking about the Crop node, or what exactly? The Crop node hasn’t really changed.

That depends on your setups, so I can’t give a meaningful answer. For some setups, the performance gain is small and for other setups, it is several times faster.


Yeah, I meant the actual image cropping. I am not sure if cropping works the same way as the current transform nodes, which keep the original image size until they no longer do.

First, thanks for working on this compositor. Since this is feedback and discussion, I want to ask: what about a CIE Lab colorspace? Many well-known apps have this for color correction or grading.
I also have a question about value accuracy. For example, I have rebuilt a colorspace transform matrix that has many digits after the decimal point, and in Blender half of the values get cut off and rounded.
This is a bit off-topic, but it happens with shader node values as well.
Most of the time the accuracy is enough. However, if you want to use the value of the Planck constant to build a blackbody radiation equation, the value does not work because it is too small. IIRC it is 6.6×10⁻³⁴.
I mean, even OpenOffice Calc can handle these numbers, so why not Blender? Would it be possible to keep full accuracy for numbers and calculations, and only round the end result off to the shader and compositing bit depth, or whatever the limitation is?

It is somewhat similar to the Sampling Space example, where cropping will happen in that same clipped inferred canvas. So not exactly a difference in the Crop node, but the overall size inference.
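To illustrate the idea of size inference in general terms (a hypothetical sketch of the concept, not Blender’s actual implementation): each result carries an inferred canvas, transforms update that canvas, and a crop then clips against whatever canvas was inferred upstream.

```python
from dataclasses import dataclass

@dataclass
class Canvas:
    width: int
    height: int

def scale(canvas, fx, fy):
    # A transform changes the inferred canvas; downstream nodes
    # see the new size rather than the original image size.
    return Canvas(round(canvas.width * fx), round(canvas.height * fy))

def crop(canvas, w, h):
    # Cropping clips against the canvas inferred so far, so the
    # result depends on any upstream transforms.
    return Canvas(min(canvas.width, w), min(canvas.height, h))

# Scale a 1920x1080 canvas by half, then crop to 1280x720:
# the crop clips against the already-scaled 960x540 canvas.
c = crop(scale(Canvas(1920, 1080), 0.5, 0.5), 1280, 720)
print(c)  # Canvas(width=960, height=540)
```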

This thread is specifically about the improved compositor. So this should maybe go here:

But the limited precision is a known limitation, since we do everything in single precision, and using double precision would be much slower.
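The single-precision limits are easy to demonstrate outside Blender (a small NumPy sketch): float32 carries roughly 7 significant decimal digits, and while 6.6×10⁻³⁴ itself is still representable, intermediate products like h² underflow to zero.

```python
import numpy as np

# float32 keeps only ~7 significant decimal digits, so long matrix
# coefficients get rounded on input:
coeff = np.float32(0.4124564390896922)  # an sRGB->XYZ matrix entry
print(coeff)  # trailing digits are lost

# The Planck constant itself still fits in float32
# (smallest normal float32 is ~1.2e-38)...
h32 = np.float32(6.62607015e-34)

# ...but squaring it, as a blackbody formula would, underflows to 0:
print(h32 * h32)                          # 0.0 in single precision
print(np.float64(6.62607015e-34) ** 2)    # fine in double precision
```

So the constant can be *stored*, but any calculation whose intermediates drop below ~1e-45 silently becomes zero in single precision.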


It’s good to see that the viewport compositor canvas size is now locked to the camera frame when in camera view: EEVEE & Viewport - Blender Developer Documentation

Although there still seem to be some issues here; as you zoom out, the canvas size correctly shrinks along with the edge of the camera frame, but zooming in doesn’t have the inverse effect. You can see this issue with a box mask overlay: it shrinks when zooming out, but doesn’t scale up in the opposite direction. I can post a couple of screenshots if this isn’t clear.

Related to this, could the canvas centre also be locked/tied to the camera view centre and not be affected by any additional panning/offset of the viewport (shift + mmb drag)? As any offset will also change the results between the viewport and final render.

The issues you are describing are known, and are similar to #111344 - Compositor: Viewport issues when zoomed in - blender - Blender Projects. The solution would be to introduce a distinction between data windows and display windows, which is something I will do once we finish the things at hand.
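For readers unfamiliar with the terms, this is the same distinction OpenEXR makes: the display window is the nominal frame (e.g. the camera frame), while the data window is the region that actually holds pixels and may be smaller or larger. A minimal sketch of the idea (illustrative only, not Blender code):

```python
from dataclasses import dataclass

@dataclass
class Window:
    x: int       # origin within the frame
    y: int
    width: int
    height: int

@dataclass
class Image:
    display: Window  # nominal frame, e.g. the camera frame
    data: Window     # region that actually stores pixels

# Zoomed-in viewport: only part of the camera frame has pixels,
# but overlays (like a box mask) should still be positioned
# relative to the full display window, not the clipped data window.
img = Image(display=Window(0, 0, 1920, 1080),
            data=Window(400, 200, 1100, 700))
print(img.display, img.data)
```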


Yes, this is the issue.

The general performance of the new viewport compositor is great. It feels much closer to the responsiveness of commercial programs like Fusion/Resolve, so it’ll be great if this translates over to the render compositor. So thanks for working on this.

On another point, I noticed that the VSE in 4.1 now has a somewhat improved downscaling filter (2x box, which is much better than the default bilinear and slightly better than the previous 3x3 subsampling option): Compositor & Sequencer - Blender Developer Documentation

Could something similar be looked at as part of the compositor re-design? The current interpolation/filtering in the Transform/Scale nodes when downscaling is poor. I raised this issue here:
(unfortunately the attached images there aren’t visible now)


My understanding is that the current “compositor re-design” is not so much of a “re-design”, and more of a “cleanup, performance improvements and unification of CPU vs GPU behavior”. That said, improving filtering is certainly an area that could be looked at in some future, but perhaps separately from the current “finish the GPU compositor to be feature complete” effort. Both in terms of better filters when scaling down (e.g. box like VSE now has, or something better like EWA), and in terms of possibly adding Cubic Mitchell (instead or in addition to current Cubic B-Spline that introduces a lot of blur), etc.

Several “off by half a pixel” issues have just recently been fixed in the compositor though, and some of them could have led to the “scaling down exactly 2x using linear filtering does not actually do any filtering” problem. So maybe that particular issue is already fixed.
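That failure mode is easy to reproduce outside Blender (a NumPy sketch): taking one source texel per output pixel, which is what a half-pixel-offset linear sample degenerates into at exactly 2x, just drops every other row and column, while a 2x box filter averages each 2x2 block.

```python
import numpy as np

# A 1-pixel checkerboard: the worst case for naive downscaling.
src = np.indices((4, 4)).sum(axis=0) % 2  # alternating 0/1 pattern

# Point-sampling at 2x (stand-in for the broken linear case):
# take every other pixel. All detail aliases away to a constant.
nearest = src[::2, ::2]

# 2x box filter: average each 2x2 block. The checkerboard
# correctly resolves to mid grey.
box = src.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(nearest)  # all zeros: the pattern is gone
print(box)      # all 0.5: the correct average
```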


Yes, this depends on how wide/narrow the scope of the work is. But I thought it was worth mentioning now, in case there are any architectural changes being decided on that might affect how easily any changes to the filtering/scaling can be done in the future.

Scaling is one of the key operations for compositing, so I find it really surprising that this has been unresolved for such a long time. But hopefully, it might not be too complicated to fix going forwards.

Hi @OmarEmaraDev, I tried the latest build and the GPU compositor is so fast! Thanks for implementing the feature. I noticed that the Denoise node slows the compositor down; is this a limitation, or is the Denoise node being worked on so that it will also use the GPU?

There is a patch to support GPU denoising for the compositor, however, it will probably be for 4.3.


Hi, can we have a node similar to the “Constant” node from Nuke or Natron?

  • It should output a constant color which can be used as a BG color node.
  • By default it should respect the render size set in the render panel.
  • It might have a resolution option inside the node to define a custom height and width.


  • It would be helpful for retaining the resolution while mixing media of various resolutions. Previously we had to make a blank image in GIMP to connect as the first image.
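The behavior described above can be mimicked outside Blender too (a NumPy sketch with hypothetical sizes): build a constant-color canvas at the render resolution and alpha-over smaller media onto it, so the output always retains the render size.

```python
import numpy as np

def constant_canvas(width, height, rgba):
    """Constant-color 'BG' image at a chosen (e.g. render) resolution."""
    return np.broadcast_to(np.asarray(rgba, dtype=np.float32),
                           (height, width, 4)).copy()

def paste_over(bg, fg, x, y):
    """Alpha-over a smaller image at (x, y); output keeps bg's size."""
    h, w = fg.shape[:2]
    region = bg[y:y + h, x:x + w]
    a = fg[..., 3:4]  # per-pixel alpha of the foreground
    bg[y:y + h, x:x + w] = fg * a + region * (1.0 - a)
    return bg

canvas = constant_canvas(1920, 1080, (0.1, 0.1, 0.1, 1.0))
clip = np.ones((720, 1280, 4), dtype=np.float32)  # smaller media
out = paste_over(canvas, clip, 320, 180)
print(out.shape)  # (1080, 1920, 4): the render size is retained
```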

Maybe you can achieve the same with the Image node? It lets you modify the color, resolution, alpha, etc.


I think this can be used, with a Scale node set to Render Size > Stretch. Will try it. Thanks.

BTW, can you check whether this is a bug or not? The new compositing size inference feels a bit confusing TBH.

This node setup is working pretty nicely. Thanks for the tip on the Image node.
