I might be wrong, but AFAIK ID passes are not anti-aliased because a pixel either points to an object or it doesn't; there is no in-between. Makes sense that it looks that way when it's out of focus.
No passes are supported yet. This is the next milestone for the project though.
That’s also because the noisy pass is not supported yet and just returns a zero color.
Well, it is not about how many samples you can take. Let's say you opt to accumulate the IDs of objects, much like other light passes. Further, say you have two touching objects whose IDs are 1 and 1000 respectively. If you render the ID pass with an infinite number of samples, you will get a smooth (maybe linear?) gradient from 1 to 1000 along the edge. Now, how would you generate a mask for the first object? Split the difference and use a threshold of 500? What if there is another object whose ID is 500?
So you see, accumulating more samples will not really solve the issue. That’s why we have Cryptomatte.
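To make the threshold problem concrete, here is a tiny illustrative sketch (the sample values and the `mask_by_threshold` helper are hypothetical, not Blender code) showing how accumulating ID samples on an edge pixel produces an in-between value that collides with a legitimate third ID:

```python
# Hypothetical edge pixel between object 1 and object 1000:
# 8 samples, some hitting each object.
samples = [1, 1, 1000, 1000, 1000, 1000, 1, 1000]

# What an accumulated (averaged) ID pass would store for this pixel.
accumulated_id = sum(samples) / len(samples)
print(accumulated_id)  # 625.375 -- neither 1 nor 1000

# A threshold-based mask cannot distinguish this edge pixel from a pixel
# fully covered by some unrelated object whose ID happens to be near 625.
def mask_by_threshold(pixel_id, target_id, tolerance):
    return abs(pixel_id - target_id) < tolerance

print(mask_by_threshold(accumulated_id, 625, 1))  # True -- false positive
```

No choice of threshold fixes this, because the averaged value carries no record of *which* IDs were mixed, only their weighted sum.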
@OmarEmaraDev I forgot about these. Should I repost these as bugs on Gitea?
I haven't really verified with the latest versions yet, but I think at least some of these are still there. Not sure about the cases where the viewport compositor looks more correct.
Most of those are expected differences as far as I can see, with some having a more correct behavior in the realtime compositor.
There are one or two that should be handled though, so I will look into them and submit a fix directly. Thanks for bringing them to my attention.
Cryptomatte just stores more information per pixel to make this possible. Notice how the view layer has an option called Levels in the Cryptomatte panel? The more you increase the number of levels, the more passes Blender will render and store. The Cryptomatte node uses all of those passes to generate a good mask.
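As a rough sketch of the idea (this is illustrative pseudodata, not Blender's actual pass layout): instead of one averaged ID, each pixel stores a ranked list of (ID, coverage) pairs, with the number of pairs corresponding to the Levels setting. Extracting a matte is then just summing the coverage of the pairs that match the picked object:

```python
# Hypothetical edge pixel covered 3/8 by object 1 and 5/8 by object 1000,
# stored as ranked (id, coverage) pairs -- here with two levels.
pixel = [(1000, 0.625), (1, 0.375)]

def matte(pixel, object_id):
    """Sum the coverage of every stored pair matching the picked object."""
    return sum(cov for oid, cov in pixel if oid == object_id)

print(matte(pixel, 1))    # 0.375 -- a proper anti-aliased mask value
print(matte(pixel, 500))  # 0.0   -- unrelated IDs cannot collide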
Well, then we would have come full circle and reimplemented Cryptomatte.
I agree that cryptomatte should be reimplemented because it cannot be used inside node groups. ID masks should also be reimplemented because they can be used in node groups, but they do not produce a usable mask.
Speaking of cryptomatte, is cryptomatte in the real-time compositor a 3.6 target? Or a 4.0 target? It would be absolutely incredible if it was in 4.0 or so! (Selfishly, it would save me hundreds of hours of work on a current project.) No rush, you’re doing incredible work and I’m extremely grateful to you for doing it
I am not really sure, as there are 3 independent efforts that need to come together to make this happen. Namely, we need to implement multi-pass compositing, prepare Cryptomatte for realtime use, and implement the nodes themselves. So I wouldn't be able to give you a timeline for that at the moment.
Thanks Omar, the realtime viewport compositor is amazing even on my MacBook, but I just noticed that distortion effects like Rotate and Transform seem to display differently in the viewport than in the render output. For example: in the viewport, a rotation appears to happen at the end of the chain instead of where it's located in the order of operations?
In this image the rendered compositor output is at the bottom and looks as I expect, but the viewport at the top seems to show the rotation happening after the Flip mirror effect; note the edges clipping to alpha.
Yes, this is currently one of the differences with the CPU compositor.
I recommend you read the following section in the documentation: Realtime Compositor — Blender Manual
Note that this is recognized as a limitation that needs to be handled, but it is taking some time to get the design right.
Hi!
Fast Gaussian blurs render differently in the viewport and the final render. I thought it was because of the Relative size and the different resolutions between the render and the viewport, but even when trying to match them, or even with Relative off, the blur is smaller in the viewport.
This is an expected difference, since the Realtime Compositor currently uses an accurate Gaussian convolution even for the Fast option. However, note that our future implementation of Fast Gaussian will aim to match the accurate version, so the difference will remain, with the realtime compositor being more correct here.
Thank you for your quick reply. Blurs are costly in the viewport; is there any recommendation about the cheapest type of blur? I'm not seeing much difference between Flat and Gaussian, so I'm guessing only the blur size matters?
All types of blur are identical cost-wise, except for the cost of computing their weights, which is negligible and cached anyway. The Fast Gaussian mode is supposed to be orders of magnitude faster than the other types, but it is not yet implemented, as I mentioned. That's because our implementation suffers from numerical instability, making it unusable for high blur radii, so we need to do more research to make it work, which I haven't had the time to do yet.
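To illustrate why the kernel type barely matters cost-wise (a hypothetical sketch, not Blender's internals; the sigma-from-radius relation is an assumption): the only per-type work is building a small, cacheable weight table, while the convolution itself does the same 2 × radius + 1 multiply-adds per pixel regardless of the weights' shape:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def gaussian_weights(radius):
    """Normalized Gaussian weight table; tiny and cached after first use."""
    sigma = radius / 2.0  # assumed relation, for illustration only
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(w)
    return tuple(v / total for v in w)

@lru_cache(maxsize=None)
def flat_weights(radius):
    """Box (Flat) weight table of the same size."""
    n = 2 * radius + 1
    return tuple(1.0 / n for _ in range(n))

# Both tables have the same length, so the blur loop itself costs the same.
print(len(gaussian_weights(9)), len(flat_weights(9)))  # 19 19
```

A true Fast Gaussian, by contrast, is typically done with a recursive filter whose cost is independent of the radius, which is where the order-of-magnitude speedup (and the numerical-stability difficulty at high radii) comes from.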