Real-time Compositor: Feedback and discussion

I hope it's that simple of a fix. If not, I would hope that the ID mask could reuse whatever code gives the cryptomatte its smooth-edged mask.

Whether or not the current ID mask output is the “standard”, I think we can all agree that it leaves a lot to be desired.


I might be wrong, but AFAIK ID passes are not anti-aliased because each pixel either points to an object or it doesn't; there is no in-between. It makes sense that it looks that way when it's out of focus.

Have the Mist or Z passes been implemented yet?

Thanks

Looks like if the Noisy Image is mixed with the Image in the compositor, the result is a full black screen for the Noisy Image (screenshots attached at 0% mix and at 100% mix).
No passes are supported yet. This is the next milestone for the project though.

That’s also because the noisy pass is not supported yet and just returns a zero color.

Well, it is not about how many samples you can take. Let's say you opt to accumulate the IDs of objects, much like other light passes. Further, say you have two touching objects, whose IDs are 1 and 1000 respectively. If you render the ID pass with an infinite number of samples, you will get a smooth gradient (maybe linear?) from 1 to 1000 along the edge. Now, how would you generate a mask for the first object? Split the difference and use a threshold of 500? What if there is another object whose ID is 500?

So you see, accumulating more samples will not really solve the issue. That’s why we have Cryptomatte.
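
To make the ambiguity concrete, here is a tiny Python sketch (illustrative only, not Blender code) of what accumulating the IDs from the example above would give you at a pixel on the boundary of the two objects:

```python
# Illustrative sketch only, not Blender code: accumulate an ID pass
# at an edge pixel shared by objects with IDs 1 and 1000.
import random

ID_A, ID_B, ID_C = 1, 1000, 500  # two touching objects, plus a third one

def edge_pixel(coverage_a, samples=100_000):
    """Average the sampled IDs at a pixel partially covered by object A."""
    total = 0.0
    for _ in range(samples):
        # Each sample hits exactly one object; there is no in-between.
        total += ID_A if random.random() < coverage_a else ID_B
    return total / samples

print(edge_pixel(0.5))  # ~500.5, a value that means nothing by itself...
print(float(ID_C))      # ...and is indistinguishable from object 500
```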


@OmarEmaraDev I forgot about these. Should I repost these as bugs on Gitea?

I haven't really verified with the latest versions yet, but I think at least some of these are still there. Not sure about the cases where the viewport compositor looks more correct.


Most of those are expected differences as far as I can see, with some having more correct behavior in the realtime compositor.
There are one or two that should be handled though, so I will look into them and submit a fix directly. Thanks for bringing them to my attention.


How would the cryptomatte work in the example you gave and why couldn’t the ID mask be adapted to use the cryptomatte’s method on the backend?

Ah yes, that makes sense: there can be no blending of values. I suppose the only way is to have a pass for each ID, then. How does Cryptomatte do it?

@Timothy_m @Hadriscus

Cryptomatte just stores more information per pixel to make this possible. Notice how the view layer has an option called Levels in the Cryptomatte panel? The more you increase the number of levels, the more passes Blender will render and store. The Cryptomatte node uses all of those passes to generate a good mask.
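
For illustration, here is a heavily simplified sketch of that idea (the real format hashes object names into float IDs and packs the pairs into extra passes, but the principle is the same):

```python
# Simplified sketch of the Cryptomatte idea, not Blender's implementation:
# each pixel stores ranked (ID, coverage) pairs, one pair per "level",
# instead of a single blended ID value.

# An edge pixel half covered by object 1 and half by object 1000,
# rendered with Levels = 2:
pixel = [(1, 0.5), (1000, 0.5)]

def matte_value(pixel, wanted_id):
    """Sum the coverage of every rank whose ID matches the wanted object."""
    return sum(coverage for obj_id, coverage in pixel if obj_id == wanted_id)

print(matte_value(pixel, 1))    # 0.5 -> a proper soft, anti-aliased edge
print(matte_value(pixel, 500))  # 0.0 -> no false positive for object 500
```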

Well, then we would have come full circle and reimplemented cryptomatte. :smiley:


I agree that cryptomatte should be reimplemented, because it cannot be used inside node groups. ID masks should also be reimplemented: they can be used in node groups, but they do not produce a usable mask.

Speaking of cryptomatte, is cryptomatte in the real-time compositor a 3.6 target? Or a 4.0 target? It would be absolutely incredible if it was in 4.0 or so! (Selfishly, it would save me hundreds of hours of work on a current project.) No rush, you’re doing incredible work and I’m extremely grateful to you for doing it :slight_smile:


I am not really sure, as there are three independent efforts that need to come together to make this happen. Namely, we need to implement multi-pass compositing, prepare cryptomatte for realtime use, and implement the nodes themselves. So I wouldn't be able to give you a timeline for that at the moment.


Thanks Omar, the realtime viewport compositor is amazing, even on my MacBook. But I just noticed that distortion effects like Rotate and Transform seem to display differently in the viewport compared to the render output. For example, in the viewport a rotation appears to occur at the end of the chain instead of where it's located in the order of operations?


In this image the rendered compositor output is at the bottom and looks as I expect, but the viewport at the top seems to show the rotation happening after the Flip (mirror) effect; note the edges clipping to alpha.


Yes, this is currently one of the differences with the CPU compositor.
I recommend you read the following section in the documentation: Realtime Compositor — Blender Manual

Note that this is recognized as a limitation that needs to be handled, but it is taking some time to get the design right.
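
As a loose illustration of the lazy evaluation idea described there (hypothetical code, not Blender's actual implementation): transform nodes in the realtime compositor only update the image's transform, and the pixels are resampled, and clipped, once, when a later operation actually needs them.

```python
# Hypothetical sketch of eager vs. lazy transform evaluation.
import numpy as np

def rotation(degrees):
    r = np.radians(degrees)
    return np.array([[np.cos(r), -np.sin(r)],
                     [np.sin(r),  np.cos(r)]])

FLIP_X = np.array([[-1.0, 0.0],
                   [ 0.0, 1.0]])

# CPU compositor (eager): every transform node resamples onto its canvas
# immediately, clipping to it, in exact node order:
#   pixels = resample(pixels, rotation(45))  # Rotate node clips here
#   pixels = resample(pixels, FLIP_X)        # Flip node clips here

# Realtime compositor (lazy): transform nodes only compose the matrix;
# resampling (and its clipping to the canvas) happens once, later:
transform = np.eye(2)
transform = rotation(45) @ transform  # Rotate node: no resampling yet
transform = FLIP_X @ transform        # Flip node: still no resampling
print(transform)  # pixels = resample(pixels, transform) at realization
```

That deferred resampling is why the rotation's clipping can appear to happen at a different point in the chain than in the CPU compositor.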


Hi!
Fast Gaussian blurs render differently in the viewport and the final render. I thought it was because of the Relative size and the different resolution between the render and the viewport, but even when trying to match them, or even with Relative off, the blur is smaller in the viewport.

The other blur types look correct, but would it be possible to use the actual render resolution rather than the viewport resolution when using Relative?
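
(If I understand Relative correctly, the size is a fraction of the image dimensions, so for example a Relative X of 0.05 would be 0.05 × 1920 = 96 pixels at a 1920-wide render but only 0.05 × 960 = 48 pixels in a half-size viewport.)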


This is an expected difference, since the Realtime Compositor uses an accurate Gaussian convolution even for the Fast option. However, note that our future implementation of Fast Gaussian will aim to match the accurate version, so the difference will remain, with the realtime compositor being more correct here.


Thank you for your quick reply. Blurs are costly in the viewport; is there any recommendation on the cheapest type of blur? I'm not seeing much difference between Flat and Gaussian, so I'm guessing only the blur size matters?


All types of blur are identical cost-wise, except for the cost of computing their weights, which is negligible and cached anyway. The Fast Gaussian mode is supposed to be orders of magnitude faster than the other types, but it is not yet implemented, as I mentioned. That's because our implementation suffers from numerical instability, making it unusable for high blur radii, so we need to do more research to make it work, which I haven't had the time to do yet.
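
For the curious, here is a rough NumPy illustration of that point (not Blender's GPU code, and the sigma-from-radius rule below is just an assumption for the demo): the blur types share the same convolution loop, and only the one-time weight computation differs.

```python
# Rough illustration, not Blender's GPU code: two blur types, one loop.
import numpy as np

def flat_weights(radius):
    """Box (Flat) weights: every tap contributes equally."""
    w = np.ones(2 * radius + 1)
    return w / w.sum()

def gaussian_weights(radius, sigma=None):
    """Gaussian weights; sigma = radius / 3 is an arbitrary demo choice."""
    sigma = sigma or radius / 3.0
    x = np.arange(-radius, radius + 1)
    w = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return w / w.sum()

def blur_1d(row, weights):
    # The per-pixel work is one multiply-add per tap, regardless of which
    # weights are used, so the cost depends only on the radius.
    return np.convolve(row, weights, mode="same")

row = np.random.rand(1920)
for weights in (flat_weights(32), gaussian_weights(32)):
    print(blur_1d(row, weights)[:3])  # same amount of work either way
```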


Realtime Compositor: Implement Fog Glow Glare node :heavy_check_mark:
