Real-time Compositor: Feedback and discussion

The stack trace was helpful, thanks. I was able to reproduce the crash with this setup:


I’ll fix it within the next few days.


The updates are slow on Grease Pencil? They seem to be fine on my end.


@AdamEarle I can’t contextualize your comment, can you clarify?

Could someone please add something about controlling the focal point to the Defocus node documentation?
Using Blender 4.0 on an ancient AMD CPU: if I select “Use Z-Buffer”, it assumes that the focal point is at 10 m. To adjust the focal point, it is necessary to multiply the depth information connected to the Z input by a suitable constant, e.g. if a focal point at 2 m is required, use 0.2 (sketched below).
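For reference, here is a minimal Python sketch of that workaround, assuming a compositor tree where a depth pass feeds the Defocus node's Z input (node and socket names are from Blender's Python API; the 0.2 factor is the 2 m / 10 m example above):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Scale the depth pass so the Defocus node's assumed 10 m focus lands at 2 m.
multiply = tree.nodes.new('CompositorNodeMath')
multiply.operation = 'MULTIPLY'
multiply.inputs[1].default_value = 0.2  # 2 m desired / 10 m assumed

defocus = tree.nodes.new('CompositorNodeDefocus')
defocus.use_zbuffer = True  # the "Use Z-Buffer" toggle

tree.links.new(multiply.outputs['Value'], defocus.inputs['Z'])
```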
cheers Matt

The focus distance is defined by the Depth Of Field > Focus Distance setting of the active camera in the scene selected in the node. Camera objects have a default focus distance of 10 m, which is why you are seeing that value.
See the documentation here:

https://docs.blender.org/manual/en/latest/compositing/types/filter/blur/defocus.html#camera-settings
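If you prefer to set it from Python, the same setting is available on the camera data (a minimal sketch; it assumes the scene has an active camera):

```python
import bpy

# The Defocus node reads its focus point from the scene camera's
# Depth of Field settings; 10 m is simply the default value.
camera = bpy.context.scene.camera.data
camera.dof.focus_distance = 2.0  # meters
```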

@OmarEmaraDev do you know if there are any plans to improve video scopes performance with the compositor? (When the scopes are exposed, performance takes a hit.)


I am not aware of any plans, but I remember @aras_p did some optimizations for VSE scopes; I am not sure if that implementation is shared with the image editor ones. If not, maybe we can look at it at some point.

I looked at whether the scopes code could be shared between the Image space and the Sequencer space, but gave up on trying to untangle the Image space code. It has a whole hardcoded, special “UI button that shows the scope graph” widget that is quite cumbersome to factor out, especially for me, who knows nothing about the UI code in Blender :)

Is this question about the scopes in VSE/Sequencer, or about something else? (what are the “scopes in compositor” and where can I find them?)

I am actively working on improving the scopes in the sequencer; the first step is making them not look like something from the year 1998.

A larger question is whether scopes data should be computed on the GPU instead. The GPU is far more suitable for this kind of calculation, and it makes even more sense in the GPU/realtime compositor: right now, if compositing happens on the GPU, the resulting image needs to be read back to the CPU, the scopes data calculated there, and then sent back to the GPU for display. That is a lot of unnecessary work! Maybe someone should make it happen on the GPU someday. I could even try doing that, but I would probably need to discuss/agree with someone first.
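To illustrate the shape of the work: a scope is essentially a per-pixel reduction over the image, as in this rough NumPy sketch of a luma histogram (Rec.709 weights; the exact math Blender's scopes use may differ). This is what currently runs on the CPU after readback, and every pixel is independent, which is exactly what a GPU compute shader with atomic adds handles well:

```python
import numpy as np

def luma_histogram(pixels: np.ndarray, bins: int = 256) -> np.ndarray:
    """Histogram of Rec.709 luma for a float RGBA image of shape (H, W, 4)."""
    rgb = pixels[..., :3]
    # Weighted sum per pixel: an embarrassingly parallel map...
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # ...followed by a reduction into histogram bins.
    hist, _ = np.histogram(np.clip(luma, 0.0, 1.0), bins=bins, range=(0.0, 1.0))
    return hist

# Example: a random 1080p frame.
frame = np.random.rand(1080, 1920, 4).astype(np.float32)
print(luma_histogram(frame)[:8])
```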


Scopes in the context of the compositor just means the scopes in the image editor.

It does make sense to compute them on the GPU, but we don’t really have the infrastructure for this to happen at the moment. The realtime compositor still transfers its data to the CPU and back to the GPU to be viewed regardless, but there is ongoing work to alleviate that by introducing GPU storage in image buffers. We can look into GPU acceleration then.


but there is ongoing work to alleviate that by introducing GPU storage in image buffers. We can look into GPU acceleration then.

Nice! Yeah, let’s look at computing scopes on the GPU then. Also, someone could start thinking about moving VSE things to the GPU at that point, I guess. All the image scaling and effects it does sound far more suitable for the GPU.


It looks like having the compositor enabled in all scenes, or in multiple scenes, slows down the GPU compositor, even when those scenes’ viewports are not using the real-time viewport compositor and do not have any open windows. I definitely get a speed bump in the regular compositor if I disable the compositor in the other scenes. I know we had some reports about unnecessary calculations; I am wondering why unrelated scenes might be calculating unseen or unneeded compositor results.

Another thing is that the full-frame compositor has much faster interaction with the node elements in complicated setups (a Blender file with multiple scenes). I am guessing that the CPU-based compositors are not calculating results from other scenes.


An interesting recent thread
https://twitter.com/JonLampel/status/1752931740090532313
gave me an idea I’d like to quickly bring up here: a node to automate the process of setting the View Transform to Raw and using the Convert Colorspace node, a “Wait for View Transform” node. Simply put, everything before it is applied before the View Transform, and everything after it, after.
If this cannot be implemented, or the devs are against it, the workaround (sketched below) should at least be placed somewhere in the docs, because it isn’t an obvious trick but may be game-changing for some. :)
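For anyone looking for the manual workaround itself, here is roughly what it looks like in Python (a sketch; the color space names depend on the OCIO config, and 'sRGB' here stands in for whatever display transform you want to apply by hand):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True

# Disable the implicit display transform on the render output...
scene.view_settings.view_transform = 'Raw'

# ...and apply it explicitly inside the node tree instead. Nodes placed
# after this one then effectively run "after" the view transform.
tree = scene.node_tree
convert = tree.nodes.new('CompositorNodeConvertColorSpace')
convert.from_color_space = 'Linear Rec.709'  # 'Linear' in Blender 3.x configs
convert.to_color_space = 'sRGB'
```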


We are currently trying to solve issues with the GPU compositor locking Blender for no reason; once we solve this, we should be able to look into other issues like this one. But if you have an easy-to-reproduce file, feel free to open a bug report just to keep it in sight.

That would be an internal node tree detail implicitly affecting the parent Scene, which probably isn’t a good idea. The documentation does mention this in the Color Management section at least.

https://docs.blender.org/manual/en/latest/render/color_management.html


For camera navigation performance under a heavy compositing setup, does something like a start-sample option (start processing the viewport compositor only when the viewport render sample count is over this value, similar to the denoiser option) make sense? I wonder if this could also be applied to EEVEE.


We actually did talk about that last week, so it might make sense indeed. We are just not sure yet what the user experience will be like: the compositor can drastically change the render, while denoising only tunes it, so we could be looking at a lot of flickering.


I do think a ‘no implicit color management, please’ checkbox nearer to the node tree would be a good idea, though. The menu from the Render tab’s color management settings might also be nice to have on the ‘Composite’ node, to make it explicitly clear where the color management happens.

Hi, is more “modernization” work planned for the compositor after it is made fully functional? For example, exposing node properties as sockets, adding some types such as boolean/string, changing some of the nodes to their shader/geometry nodes equivalents (which are often more complete and/or consistent), etc. Thank you, and apologies if that has been mentioned before.


Yes, of course. It is just that the focus right now is on finishing the GPU compositor and unifying it with the other compositors.


Ok, thanks. (I was just asking about planning; there is no need for justification. I am aware of the work you’re currently doing.)

This would break a lot of assumptions about color management pipelines, not just in Blender but also for interop with other applications.

I think what you generally want is to do color processing in a different color space. In this case it is a particular display space, but someone might also want to do it in, e.g., a log space. The way to do that is a setup with one Convert Colorspace node to convert into that space, then some operations, and then another Convert Colorspace node to convert out of it again (see the sketch below).

The end result should be the same, as the final Convert Colorspace and the view transform would cancel out.
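As a concrete sketch of that round-trip setup (node names are from Blender's Python API; 'Filmic Log' is just an example working space, the color space names depend on the OCIO config, and the blur stands in for "some operations"):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Convert into the working space, operate, then convert back out again.
to_log = tree.nodes.new('CompositorNodeConvertColorSpace')
to_log.from_color_space = 'Linear Rec.709'
to_log.to_color_space = 'Filmic Log'

blur = tree.nodes.new('CompositorNodeBlur')  # the "some operations" step

from_log = tree.nodes.new('CompositorNodeConvertColorSpace')
from_log.from_color_space = 'Filmic Log'
from_log.to_color_space = 'Linear Rec.709'

tree.links.new(to_log.outputs['Image'], blur.inputs['Image'])
tree.links.new(blur.outputs['Image'], from_log.inputs['Image'])
```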
