Real-time Compositor: Feedback and discussion

I will look into it; maybe we should be doing some energy conservation mapping.
But first, at this point, I will move Bloom into its own new option and implement it for the CPU compositor, until we implement Fog Glow for the GPU.

4 Likes

Optimisation question:

When a Mix Color node with the “Mix” blending mode has its factor set to 0 or 1, are the nodes linked to the opposite input ignored or are they processed anyway?

They are currently processed anyway.

1 Like

And I guess there is no workaround to conditionally shut down part of the node graph, right?

Not for the GPU compositor at the moment, no.
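
One manual workaround from the Python side is muting: muted nodes pass their input straight through, which skips their processing. Here is a minimal sketch, assuming a compositor tree containing a Mix Color node named “Mix” and an expensive branch node named “Blur” (both names are hypothetical):

```python
import bpy

# Compositor node tree of the active scene ("Use Nodes" must be enabled).
tree = bpy.context.scene.node_tree

# Hypothetical node names; rename to match your own tree.
mix = tree.nodes["Mix"]
blur = tree.nodes["Blur"]

# With the factor pinned at 0, only the first image input matters, so the
# branch feeding the second input can be muted by hand. Muted nodes pass
# their data through unprocessed, avoiding their cost.
factor = mix.inputs["Fac"].default_value
blur.mute = (factor == 0.0)
```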

We added a new mode to the Glare node called Bloom, which you are probably familiar with from EEVEE.

  • This is implemented for both CPU and GPU, so you should be able to get identical results.
  • It is somewhat similar to the Fog Glow mode, except it is significantly faster to compute, has a smoother falloff, and a greater spread.
  • It is currently not “energy conserving”, so you should probably use a mix factor closer to -1. We plan to address that in the future.

  • Bloom was temporarily used in place of Fog Glow for the GPU compositor, so Fog Glow is now unimplemented for the GPU once again.
  • The plan is to replace Fog Glow with a faster, more physically accurate implementation for both CPU and GPU.
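
For scripters, here is a minimal sketch of enabling the new mode from Python, assuming the Glare node exposes it as the 'BLOOM' enum value and that the mix amount is still the `mix` property (verify both against your build’s API docs):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True  # make sure the compositor tree exists
tree = scene.node_tree

# Add a Glare node and switch it to the new Bloom mode ('BLOOM' is the
# assumed enum value).
glare = tree.nodes.new("CompositorNodeGlare")
glare.glare_type = 'BLOOM'

# Bloom is not yet energy conserving, so bias the mix toward the
# original image, as suggested above.
glare.mix = -0.8
```
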
39 Likes

Glad to finally see this!
Since the mode is no longer limited to the same options as glare, could it be updated to include the additional options from EEVEE-Legacy bloom? Namely:

  • Float size/radius that isn’t hard-limited between 6-9
  • Knee
  • Clamp (this one might be possible to do with nodes?)
1 Like

I wonder if this is final, or will you add inputs to the node? That way you could create custom node group inputs from them. Additionally, you could use the inputs for masking, etc.

  • Float size/radius that isn’t hard-limited between 6-9

I am reluctant to support float radii, at least in the same way EEVEE implements them, since they produce instability in how Bloom is computed. But I can look into other implementation methods.

  • Knee

Indeed, this will be added for all glare types.

  • Clamp (this one might be possible to do with nodes?)

Probably not going to add it to the glare node, since clamping introduces light clipping, which is undesirable in most cases. That can be done manually as you noted.
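
For reference, one way to build that manual clamp with nodes: a Mix Color node in Darken mode outputs the per-channel minimum of its two inputs, which acts as a highlight clamp when the second input is a flat ceiling color. A sketch, assuming existing nodes named “Render Layers” and “Glare”:

```python
import bpy

tree = bpy.context.scene.node_tree

# Darken mix with factor 1 computes min(image, ceiling) per channel,
# i.e. a highlight clamp.
clamp = tree.nodes.new("CompositorNodeMixRGB")
clamp.blend_type = 'DARKEN'
clamp.inputs["Fac"].default_value = 1.0
clamp.inputs[2].default_value = (4.0, 4.0, 4.0, 1.0)  # arbitrary ceiling

# Wire it between the render and the glare (node names are assumptions).
tree.links.new(tree.nodes["Render Layers"].outputs["Image"], clamp.inputs[1])
tree.links.new(clamp.outputs["Image"], tree.nodes["Glare"].inputs["Image"])
```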

I wonder if this is final, or will you add inputs to the node?

We used preexisting node options, so we didn’t add any new inputs. But I understand what you are after, and that’s a project that we intend to tackle, though not very soon.

4 Likes

Do you think this may be a way to work around this bug at the moment, if a user were to use object ID as a way of masking?

These are already available in real time in the viewport. How are these render layers meant to be used at the moment? I simply do viewport renders to use them. What if these had checkboxes next to them, or were added as nodes/switches, or all of the above? Maybe Blender’s render slots could be utilized?

Could each layer act as a way of stacking the viewport through nodes? Throwing ideas around is OK, right?

No. The information is not available to the compositor and thus can’t be made visible through it.

Those will be available as part of the render layer node in the future. But for now, they just change what the combined output of the compositor will be.

4 Likes

Hi there, I was playing with Blender 4.2 and, surprise!, the denoise node is now supported. Cool!
Yes, cool but… any chance it will use the GPU at some point?

Yes, though it appears we will still pay the cost of CPU<->GPU memory transfer for the initial implementation.

2 Likes

I’ve just read in the 4.2 release notes that Transform nodes get applied immediately.
https://developer.blender.org/docs/release_notes/4.2/compositor/

What’s the justification for this? All the major node-based compositors concatenate image transforms, avoiding resampling until absolutely necessary, to prevent image degradation. It’s a core design principle for Nuke to keep full image resolution until the composite is reformatted or rendered.

Wouldn’t a Rasterise node make more sense?

9 Likes

To simplify the mental model of the compositor and make it more intuitive. Consider a scale-down node followed by a scale-up node: one would expect pixelation in that case, but concatenation of transform nodes would turn it into a no-op.

You probably do get some degradation due to double resampling, but it is rather unlikely that the user would concatenate multiple transform nodes, and that’s a price we pay for the aforementioned behavior.

Also, translation is not applied, only rotation and scaling, so fractional resampling due to translation is not an issue.
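
To make the example above concrete, here is a minimal sketch of the scale-down/scale-up chain in question. Under immediate evaluation each Scale node resamples on its own, so the output is pixelated; a concatenating compositor would fold the pair into a no-op:

```python
import bpy

tree = bpy.context.scene.node_tree

# Scale down to half size...
down = tree.nodes.new("CompositorNodeScale")
down.space = 'RELATIVE'
down.inputs["X"].default_value = 0.5
down.inputs["Y"].default_value = 0.5

# ...then back up to full size.
up = tree.nodes.new("CompositorNodeScale")
up.space = 'RELATIVE'
up.inputs["X"].default_value = 2.0
up.inputs["Y"].default_value = 2.0

# Because each node resamples immediately, chaining the two degrades the
# image instead of cancelling out.
tree.links.new(down.outputs["Image"], up.inputs["Image"])
```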

2 Likes

This reads as if it’s being done for people who don’t understand compositing, rather than people who do. And perhaps those who don’t should learn.

I think an option that might satisfy both camps would be to simply add a checkbox to the transform node, labelled “rasterize” or something similar. Let the nodes downstream evaluate the scale accordingly.

2 Likes

What practical application do you have that demonstrates the need for delayed transformations?

Quick example - when nesting a comp within another, using a layer as a matte source. It’s common that one wants to maintain the original matte’s resolution, not have a forced downsample and then re-upsample.

4 Likes

We regularly use clips that are 5K or even 6K, then crop in to deliver at 2K or 4K so the framing can be changed. If there are transformed elements that have already been downsampled, it can cause problems when you change the export resolution.

The whole problem with this is that it treats the display device the composite is being made on as if it were the final output device, which usually isn’t true.

If this were going the other way, scaling up by 2.0x and then back down by 0.5x, you would expect that to be a no-op. The behaviour for scaling down and then back up should be the same, for consistency.

In any case, both Nuke (concatenation) and Photoshop (Smart Objects) delay transforms for good reason.
https://learn.foundry.com/nuke/content/comp_environment/transforming_elements/node_concatenation.html

I’m trying to find the original design docs for the compositor because I’m sure there was a discussion about exactly this.

7 Likes