Real-time Compositor: Feedback and discussion

@OmarSquircleArt

Is there any chance of adding these nodes to the realtime compositor?

  1. A matrix multiplication node
  2. A matrix inversion node

This need came up in this discussion: Filmic as default is hurting the experience for people - #252 by kram1032.

I also wonder whether there are any future plans to add a vignette node with different modes (instead of creating the vignette manually).

Really the ideal version of this would be an entire matrix-type socket, so you could pass conversions along rather than selecting from a list. Might not be in scope though.

One thing I'd especially love is far more extensive node parity between geometry, shader, and compositing nodes.

Like, for instance, why isn't there a Vector Math node for compositing?
Some of that can be done with Mix nodes, but quite often I find myself splitting up a color to apply the exact same math function to each of its components and combine them later, i.e. exactly what a Vector Math node would do in one step.
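To illustrate, here is what that split/recombine workaround boils down to, compared with a single component-wise (vector) operation. This is a minimal Python sketch of the node behaviour being requested, not actual Blender code:

```python
import math

# A color as an RGB triple in the [0, 1] range.
color = (0.25, 0.5, 1.0)

# The Separate/Combine workaround: three identical scalar math nodes.
r, g, b = color
via_split = (math.sqrt(r), math.sqrt(g), math.sqrt(b))

# What a single Vector Math node would do: one component-wise operation.
via_vector = tuple(math.sqrt(c) for c in color)

# Both routes produce the same result; the second needs one node, not five.
```

The math is identical either way; the difference is purely node-graph ergonomics.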

And if there were a matrix socket type, that would no doubt be brilliant to have in both the compositor for advanced color space manipulations, and geometry nodes for advanced 3D space manipulations.
Also useful to have, especially for a future in which Blender works in larger working spaces:

  • every node in the compositor and shader editor alike that involves an RGB value output or color swatch should have a dropdown menu, like image textures do, for selecting how that RGB value is to be interpreted. The selection would then always be converted to the scene linear role accordingly.
  • trickily, the associated color picker ought to display colors in said space so you see what you get, which means it has to be color-managed accordingly.
  • for similar reasons it may actually be nice to have the colorspace conversion node available in shader nodes as well.
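To make the first bullet concrete: "interpreting" an RGB value means applying the chosen encoding's transfer function before the value enters the scene linear working space. As a hedged sketch (Blender would actually route this through its OCIO config; the function below is just the standard sRGB decode, not Blender API):

```python
def srgb_to_linear(c: float) -> float:
    """Standard sRGB electro-optical transfer function, per channel."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# A swatch tagged "sRGB" would be decoded channel by channel before
# being handed to the scene linear role.
swatch = (0.5, 0.25, 1.0)
linear = tuple(srgb_to_linear(c) for c in swatch)
```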

Anyway, in terms of colorspaces, what I'd really like to be able to do is:

  • select three primaries and a whitepoint in XYZ space
  • get in return a pair of linear transformations based on those primaries and the whitepoint: XYZ → RGB and RGB → XYZ

The math for that can be found here:
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html

And if we had matrix inversion and matrix multiplication within Blender's various nodes, this could be built manually.
Right now you could technically build this by calculating the necessary matrix outside of Blender and either registering it with OCIO (thereby making it available in the Colorspace Convert node) or manually building a monstrosity of nodes that do the exact matrix multiplication entry by entry.
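For reference, the construction from the Lindbloom page can be sketched in a few lines of NumPy; the inverse and the product below are exactly what the proposed matrix inversion and matrix multiplication nodes would let you wire up in the compositor (illustrative code assuming NumPy, not Blender API):

```python
import numpy as np

def rgb_to_xyz_matrix(primaries_xy, white_xy):
    """Lindbloom's RGB -> XYZ matrix from xy primaries and an xy whitepoint."""
    # Promote each xy chromaticity to XYZ with Y = 1.
    def xy_to_XYZ(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])

    prim = np.column_stack([xy_to_XYZ(x, y) for x, y in primaries_xy])
    white = xy_to_XYZ(*white_xy)
    # Scale factors so the three primaries sum to the whitepoint.
    S = np.linalg.inv(prim) @ white
    return prim * S  # column-wise scaling

# Example: sRGB primaries with a D65 whitepoint.
M = rgb_to_xyz_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                      (0.3127, 0.3290))
M_inv = np.linalg.inv(M)  # the XYZ -> RGB direction
```

With these inputs the result reproduces the familiar sRGB ↔ XYZ matrices, which is a handy sanity check for any node-based rebuild.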

Option 1 would not be dynamic, so you can't creatively adjust what you did on the fly.
Option 2 is technically dynamic, but in a rather clumsy, unintuitive, and likely slow way.

With some basic matrix manipulation functionality, you could rebuild essentially everything AgX does from within the compositor, allowing you to make a dynamically adjustable version of AgX as a custom color management / grading chain inside the compositor.

5 Likes

I am looking into it in the tracker report.

We would probably need some reasoning on why those nodes might be useful, but regardless, this is outside the scope of the realtime compositor project. Finishing the realtime compositor will, however, open the door to improving the compositor's feature set thanks to easier maintenance, so patience. :slight_smile:

This is planned some way or another, as mentioned here: Source/RealtimeCompositor/UserExperience - Blender Developer Wiki.



@kram1032 @MC-Blender Let us just focus on getting the realtime compositor done for now, then look into that. As mentioned, finishing it will open the door to further and bigger improvements through better maintenance.

4 Likes

Yeah, figured this was out of scope for now…
Oh well.

This project nevertheless has already been hugely helpful:

This compositor node tree was waaay too slow before some of the recent optimizations: things that now take seconds to at most a minute used to take multiple minutes, and sometimes failed outright due to memory demands.

Hi everyone,

Regarding the issue of limiting the compositing space to the camera region in order to maintain an aspect ratio identical to the final render: I know this has been causing inconveniences for users and it has been mentioned multiple times before. So I was thinking of trying a temporary solution to alleviate those inconveniences until we implement a proper one.

Our initial thought was to limit the compositing space to the pixels inside the camera view when the camera has an opaque passepartout. So the Render Layers node would return an image whose aspect ratio is the same as the aspect ratio of the scene render.

While thinking about that, however, I noticed some ambiguous cases. The one that comes to mind now is when the camera region intersects the viewport boundary, for instance when the camera view is partially out of view:

What should happen in this case? Should we return an image of the same aspect ratio and fill the hidden part with black or with transparent pixels? Or should we just ignore this case and return whatever image is visible, regardless of its aspect ratio?

3 Likes

I think you would want to maintain the aspect ratio of the selected camera in its output, even when part of it is occluded. Otherwise you can get some unexpected behavior when different aspect ratios are mixing, whereas the returned image with black pixels makes visual sense.

7 Likes

Ahh, thank you for the link about the vignette, I wasn't aware of that. I read it and it's really nice. Thank you!

(Yes. I just wanted to mention those nodes in case they might be simple to add, since they are more like mathematical nodes. But yes, the people who work on Filmic or AgX or view transform things can probably explain why they are needed better than I can.)

Thank you for the feedback. I agree, finishing the realtime compositor first is more important for now. It's already a big leap : ))

2 Likes

I agree that you should maintain the aspect ratio and viewport centre. I have been using the directional blur node and found it to be sensitive to differences between the viewport and the render output. This is especially obvious when I zoom the viewport to investigate the composite.

I tried the passepartout hoping that it would limit the screen-space effects to the visible region, so that logic is sound. However, in the long term you may want to use overscan for viewport effects such as blurring, which means that passepartout results will change.

3 Likes

@OmarEmaraDev When I play back the video I composite in the viewport, my system memory quickly fills up, and I have to manually change the 3D Viewport area to a Sequencer area for the memory to be freed. Isn't it the job of the realtime compositor to free my memory when necessary?

I'm using a laptop with 4 GB of RAM.

It should free memory when it is no longer needed, yes. Though most of the memory allocations happen on the GPU, so maybe your GPU driver is swapping due to insufficient VRAM?
If you think there is a memory leak somewhere, feel free to report a bug so that we can investigate it.

1 Like

@OmarEmaraDev

Hi,

Any progress on the realtime compositor for Intel macOS 13 Ventura?

You can follow the progress in T96261.

3 Likes

I was under the impression that the Cycles viewport render in camera view renders the entire camera view even when I am zoomed in very close on just a corner of the image. Does anyone know if that is the case?

Hmmm, interesting. I was only testing with Eevee so that I could have realtime playback.

Hi, I use Render Region in Output Properties and set Start Sample=Max Sample.

Cheers, mib
EDIT: Just saw you were asking about EEVEE, I am sorry.

This "Implement static cache manager" commit (rB85ce48829819) sounds intriguing. Is there a chance that it can help in the non-realtime compositor, too?

I don't think this is the case. Cycles will render the visible viewport area, limited by the border render and camera passepartout, and nothing extra outside the view.

It can help, albeit to a lesser extent, because the realtime compositor is mainly minimizing GPU<->CPU transfers and VRAM usage, which are not much of a concern for the CPU compositor.

3 Likes

It would be very helpful to have an 'Is Viewport' node (just like the one in Geometry Nodes) that could be fed into the Switch node. This would let users set up workflows that automatically behave differently in the viewport vs the post-render compositor tree.

9 Likes

Would ⚙ D16531 Compositor: Add usage option to Composite node be satisfactory, or do you think an Is Viewport node is still necessary?

7 Likes

This seems to be consistent with the Eevee/Cycles dropdown menu found on the Material Output node, and it should do the job.

1 Like