Real-time Compositor: Feedback and discussion

Can’t you use the Convert Color Space node for that?


Or just switch the whole scene from Filmic to Standard

Great to see all the nodes being completed. Would it be possible to put the Inpaint node next on the porting schedule? It’s extremely useful for greenscreen & keying operations.


This node is particularly hard to implement because it uses a serial algorithm that is difficult to parallelize on the GPU. We will likely first try to find an algorithm that produces similar results but parallelizes well, so it might take some time to get there.
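To illustrate why this class of algorithm resists GPU parallelization, here is a toy Python sketch of a boundary-inward fill (purely an illustration of the dependency structure, not Blender's actual inpainting code; the neighbour-averaging rule is made up for the example). Each filled pixel depends on neighbours that may themselves have been filled only a step earlier, which is exactly what a thread-per-pixel GPU model struggles with:

```python
import numpy as np
from collections import deque

NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def serial_inpaint(image, mask):
    """Fill masked pixels from the hole boundary inward (toy example).

    Serial by nature: each filled pixel averages neighbours that may
    themselves have been filled only moments earlier.
    """
    image, mask = image.copy(), mask.copy()
    h, w = image.shape

    def known(i, j):
        # Values of neighbours that are already known (unmasked).
        return [image[i + di, j + dj] for di, dj in NEIGHBOURS
                if 0 <= i + di < h and 0 <= j + dj < w
                and not mask[i + di, j + dj]]

    # Start from masked pixels that touch at least one known pixel.
    queue = deque((i, j) for i in range(h) for j in range(w)
                  if mask[i, j] and known(i, j))
    while queue:
        i, j = queue.popleft()
        if not mask[i, j]:
            continue  # already filled via another path
        values = known(i, j)
        image[i, j] = sum(values) / len(values)
        mask[i, j] = False
        for di, dj in NEIGHBOURS:
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj]:
                queue.append((ni, nj))
    return image
```

Because the fill order matters, the loop cannot simply be split across GPU threads without changing the result.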


Is there a projected timeline for layers or passes to be integrated? Which one is likely to be added back first? Are there special problems implementing them?

Passes will be added first, while layers are not currently being worked on. Refer to this post by Sergey as an answer:


Hey @OmarEmaraDev, I’ve been trying to use the Glare node in the 4.0 alpha, but it’s too broken. It gives weirdly strong results compared to the old CPU one and to the Bloom from EEVEE, and there’s a problem with its clipping. Check the video.

Cycles 4.0 (viewport compositor): Size is 6, but as you can see there’s too much glare

Cycles 3.6 (render compositor): expected result

EEVEE 4.0 with Bloom (viewport compositor)

It looks like what you actually need to match EEVEE’s bloom is to reduce the mix factor as well.
As for the Render Compositor, this is unfortunately a known difference that we will eventually fix and unify.


Is there a chance to make it for 4.0? Right now it’s not possible to use bloom in the viewport, as it’s not accurate. There was a workaround that you posted, but it’s not working as it should.

No, unfortunately. An accurate implementation requires convolutions, which are very expensive if not implemented right, so it is going to take some time.
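For context on the cost: a direct spatial convolution is O(width × height × kernel²), which is what makes large glare kernels so expensive, while an FFT-based convolution brings the cost down to roughly O(n log n) regardless of kernel size. Here is a minimal numpy sketch of the FFT route (an illustration of the general technique, not the planned Blender implementation):

```python
import numpy as np

def fft_convolve(image, kernel):
    # FFT-based convolution: cost is roughly O(n log n) regardless of
    # kernel size, versus O(n * k^2) for a direct spatial convolution.
    h, w = image.shape
    kh, kw = kernel.shape
    fh, fw = h + kh - 1, w + kw - 1  # full linear-convolution size
    spectrum = (np.fft.rfft2(image, s=(fh, fw)) *
                np.fft.rfft2(kernel, s=(fh, fw)))
    full = np.fft.irfft2(spectrum, s=(fh, fw))
    # Crop back to the input size, centred on the kernel origin.
    return full[kh // 2:kh // 2 + h, kw // 2:kw // 2 + w]
```

The zero-padding to the full linear-convolution size avoids the wrap-around artifacts that a naive circular FFT convolution would produce at the image borders.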

Can I ask for clarification about the workaround you posted before? I imagine other people will ask about this, and I guess it would help them.

So as I understand, this workaround should be working only in viewport (to match the CPU bloom) and for final render it should be muted, right?

Let me clarify that the Fog Glow implementation in the Realtime Compositor is essentially a higher quality and more temporally stable version of the Bloom implementation in EEVEE. This is more of an artistic effect than a realistic approximation of eye or camera response to highlights.

On the other hand, the Fog Glow implementation in the CPU Compositor is a not-so-good approximation of the response of the eye or camera to highlights, implemented by convolving the image with a specific point spread function, which in layman’s terms means we spread the highlights into neighbouring regions in a specific way described by this function. This function roughly takes the shape of an exponential, which is why I proposed you approximate it using an exponential curve, as you demonstrated in your image.
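As a toy illustration of that description, here is a small numpy sketch that thresholds the highlights and spreads them with a normalized exponential point spread function (the falloff constant and threshold are made up for the example; the CPU compositor’s actual kernel differs):

```python
import numpy as np

def exponential_psf(radius, falloff=1.5):
    # Hypothetical falloff constant; the real compositor kernel differs.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-falloff * np.sqrt(x * x + y * y))
    return kernel / kernel.sum()  # normalize so total energy is preserved

def spread_highlights(image, threshold=1.0, radius=3):
    # Spread everything above the threshold into neighbouring pixels
    # according to the point spread function, then add it back.
    highlights = np.maximum(image - threshold, 0.0)
    kernel = exponential_psf(radius)
    padded = np.pad(highlights, radius)  # zero padding at the borders
    size = 2 * radius + 1
    glow = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            glow[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return image + glow
```

The nested loop is the direct spatial convolution mentioned above; it is deliberately naive to keep the spreading behaviour visible.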

So your assessment is correct, final renders should have the RGB Curves node muted, since it already takes an exponential form. Though don’t expect the workaround to work nicely in all cases, since trying to adjust the point spread function by adjusting its result is fragile.


I should ask something regarding this: if this patch for GPU Anisotropic Kuwahara gets merged, will the CPU implementation be updated to use the same method as the GPU one? I’m mainly asking since using the GPU compositor for final renders is still not a thing, and I much prefer how the new implementation looks.


Yes, if this patch gets approved, we will follow it with a patch that implements the same method for the CPU.


There is an option to enable GPU compositing on the final render in Blender 4.0. The steps to do this are as follows:

  • From the top of Blender, select Edit -> Preferences
  • In the Preferences window, select the Experimental tab
  • Enable the option called Experimental Compositors
  • Open the Compositor node editor
  • Tick Use Nodes
  • In the sidebar (accessed by pressing N), select the tab called Options
  • Change the Execution Mode to Realtime GPU

Tested Anisotropic Kuwahara in viewport with the official hair demo. Looks very convincing and fast!


Massive difference between the viewport and the final render. Glow is almost entirely lost. In the final render you can see a white outline around the hair, which comes from the glow, but anything outside the silhouette is lost. This is the setup I’m using: basically, it uses the alpha channel as a mask for a Mix node between the regular image and the glowy image.

Can anyone help me make final render match viewport compositor?

You are using the alpha of the render as the alpha of the composite. This will clip everything in your node setup that lies outside that matte.
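A tiny 1-D numpy illustration of that clipping (hypothetical values, just to show the effect):

```python
import numpy as np

# Toy 1-D example (not Blender code): glow spreads beyond the render's
# alpha matte, so reusing that alpha as the composite's alpha discards
# every glow pixel outside the silhouette.
alpha = np.zeros(7)
alpha[3] = 1.0                                        # one-pixel "object"
glow = np.array([0.0, 0.1, 0.3, 1.0, 0.3, 0.1, 0.0])  # image after Glare
clipped = glow * alpha                                # composite with render alpha
# clipped keeps only index 3; the halo at indices 1, 2, 4, 5 is lost.
```

The fix is to not feed the render’s alpha into the composite output (or to dilate the matte) so the spread glow survives outside the original silhouette.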


I knew somebody would comment on this; I should have unplugged it. No, that’s not it. I tried without that and got the same result; I only plugged it in as a last resort to see if it would change anything, but it didn’t. Regardless, the alpha result is the same. And anyway, if the alpha were to change anything, it would change in the viewport too, since I also plugged it into the Viewer node.

See five posts above