Real-time Compositor: Feedback and discussion

I would like to point out that if there is a proper masking system in place, a good compositor won't need nodes like glare, flare, bloom and glow.

The compositor can build the above-mentioned effects through nodes on its own.

One of the most important things in compositing is masking and dealing with channels.

This really is the heart and lungs of compositing, or, if you like, the engine.

Without masking, everything here is just a filter system, similar to GMic.

Just a heads up and friendly reminder on what professional compositors really need at the most basic of levels to achieve great results.

Straight up good solid effective standard masking systems.
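
To put that concretely, here is a rough sketch of the bare-minimum pattern a masking system has to support: any effect branch gets mixed back over the untouched plate with a mask as the factor. This uses the current compositor Python API only as an illustration; node and socket names are from recent Blender builds.

import bpy  # sketch only: node/socket names as in recent Blender releases

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new("CompositorNodeRLayers")
effect = tree.nodes.new("CompositorNodeHueSat")    # stand-in for any effect branch
mask   = tree.nodes.new("CompositorNodeBoxMask")   # any mask source: box/ellipse mask, Mask datablock, cryptomatte...
mix    = tree.nodes.new("CompositorNodeMixRGB")
out    = tree.nodes.new("CompositorNodeComposite")

tree.links.new(render.outputs["Image"], effect.inputs["Image"])
tree.links.new(render.outputs["Image"], mix.inputs[1])     # untouched plate
tree.links.new(effect.outputs["Image"], mix.inputs[2])     # effected branch
tree.links.new(mask.outputs["Mask"], mix.inputs["Fac"])    # the mask decides where the effect applies
tree.links.new(mix.outputs["Image"], out.inputs["Image"])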

4 Likes

Reading through here and looking at what's been done, it seems like the compositing is taking this approach:
Asset to camera to render to the compositor to render.

Does the data really need to flow in this direction?

What if the final render was built as:

Asset to compositor (effects, masking and materials) to camera to final render.

This means there is more flexibility for the user.

It also means the user needs to understand how to hook up 3D renders to look like 3D renders.

Which the community is already very well aware of.

How does the data flow at the moment?

As I understand this, again, I think that those “GMic filters” such as exposure, bloom/glare, distortion, etc. should be part of a brand new Camera Nodetree, where you can simulate advanced lens physics, and whatever you expect to be able to control on a real Film Camera.
From there, as it would happen in real world, the formed image is passed to the compositor system, where masking, color grading, and whatever you want can take place.
I know it would be a huge job, at least in terms of workflow and compatibility, but I’d like to see this happen some time in the future.

8 Likes

I know for a fact that the After Effects “glow” filter is one of the most used effects in almost every professional anime production since the mid-2000s.

Of course you can build a glow yourself easily, but if it comes at reduced performance or just extra work, it's not worth it; professional compositing programs all have their fair share of common effects for easy use.
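
For anyone curious, this is roughly what "building it yourself" means with existing nodes: keep only the brightest pixels, blur them, and add them back. A sketch only; node names are from recent Blender and the threshold/radius values are arbitrary.

import bpy  # sketch only: the classic threshold -> blur -> add glow, built from existing nodes

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

render    = tree.nodes.new("CompositorNodeRLayers")
threshold = tree.nodes.new("CompositorNodeValToRGB")   # color ramp used to keep only the brightest pixels
blur      = tree.nodes.new("CompositorNodeBlur")
add       = tree.nodes.new("CompositorNodeMixRGB")
out       = tree.nodes.new("CompositorNodeComposite")

threshold.color_ramp.elements[0].position = 0.8        # everything below ~0.8 goes to black
blur.filter_type = 'GAUSS'
blur.size_x = blur.size_y = 60                         # glow radius, tweak to taste
add.blend_type = 'ADD'

tree.links.new(render.outputs["Image"], threshold.inputs["Fac"])
tree.links.new(threshold.outputs["Image"], blur.inputs["Image"])
tree.links.new(render.outputs["Image"], add.inputs[1])    # original
tree.links.new(blur.outputs["Image"], add.inputs[2])      # blurred highlights added on top
tree.links.new(add.outputs["Image"], out.inputs["Image"])
# (the ramp makes the glow monochrome; a fancier setup would multiply the original color back in)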

No, you want those effects in the 2D/UV space of the compositor/editor! A camera module should just be an addition.

Can you provide some explanation?

Maybe I can find a good example instead, but that's how compositing has worked up until now. It's much easier to handle masks and layers than it is to handle it in a camera module outside of the compositing workflow.

That would mean grading, compositing, and other post work is still done in the compositor, but just select things are handled outside of it (in that camera module). A compositor wants artistic control over it, not something realistic handled by camera algorithms; that is too limited to create high-aesthetic visuals. Movies are all about magic/visual trickery, not about realism. If you've ever been to a movie set, there's so much unrealistic lighting, for example, but it looks great once captured to the camera frame (realistic lighting looks like a soap opera from the 90s). The same is true in compositing: the bulk of most post-production is highly artificial and just panders to the aesthetic appeal.

4 Likes

My proposal could live on anyway: everything done in the Camera Nodes would be a pre-composite step. You can do it there or in the actual compositing pass. Both worlds could live side by side. The plus of having an in-camera composite is (apart from the “conceptual” side of it) that you would be able to have different settings for different cameras. Also, dedicated camera nodes could add features that are difficult to achieve in other ways (shake node, handheld, etc.).
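
For what it's worth, a handheld-style shake can already be approximated scene-wide by putting drivers on a Translate node in the compositor; this is only a sketch of that existing workaround (the driver expressions are made up for illustration), which is exactly the kind of thing a dedicated per-camera shake node would make unnecessary.

import bpy  # sketch of the scripted workaround; the driver expressions are just made-up layered sines

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

shake = tree.nodes.new("CompositorNodeTranslate")   # wire this in between your last node and the Composite output

# Drive the X/Y offsets from the current frame so the image wobbles a few pixels per frame.
for socket_name, expr in (("X", "sin(frame * 1.7) * 4 + sin(frame * 5.3) * 1.5"),
                          ("Y", "cos(frame * 2.3) * 3 + sin(frame * 7.1) * 1.0")):
    fcurve = shake.inputs[socket_name].driver_add("default_value")
    fcurve.driver.type = 'SCRIPTED'
    fcurve.driver.expression = expr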

3 Likes

Yep, masking and being able to layer everything up is super important. Blender already has a lot of great 3D tools for 3D space that could be translated into 2D space.

A great example of this kind of conversion would be the Array modifier. A great little bit of kit that was developed during Covid is a new piece of motion graphics software called Cavalry. Check it out for free. Cavalry is very similar to Blender in what it can do, but in 2D and with a lot more class than Blender.

Modifiers converted to work in 2D compositing as nodes is something to think about and write home to your mum about. This would allow artists to develop their own lens flares and bloom however they want using nodes; see the sketch below. Having an artist locked into a single node to achieve a remedial effect such as a lens flare (rolling my eyes into the back of my head) is very 1990s.
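
As a sketch of the "Array modifier in 2D" idea, here is one way to fake it today by stacking offset copies of the input with existing nodes (node names from recent Blender; the offsets and copy count are arbitrary).

import bpy  # sketch of an "array modifier in 2D": stack offset copies of the input with existing nodes

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

source = tree.nodes.new("CompositorNodeRLayers")
out    = tree.nodes.new("CompositorNodeComposite")

last = source.outputs["Image"]
for i in range(1, 5):                                   # four offset copies
    move = tree.nodes.new("CompositorNodeTranslate")
    move.inputs["X"].default_value = i * 80             # per-copy offset, like the Array modifier's offset
    over = tree.nodes.new("CompositorNodeAlphaOver")
    tree.links.new(source.outputs["Image"], move.inputs["Image"])
    tree.links.new(last, over.inputs[1])                # everything stacked so far
    tree.links.new(move.outputs["Image"], over.inputs[2])  # this copy on top
    last = over.outputs["Image"]

tree.links.new(last, out.inputs["Image"])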

Once again, to be super clear on this: masking is incredibly important because without it, you're stuck with just filter @#$%ing (industry term).

Link for Cavalry:
https://www.youtube.com/@cavalryapp

2 Likes

These are animation related and we have all the tools to handle them already. Camera nodes could handle optical effects instead, but they'd be baked into the image, and you usually want to avoid that because it's destructive and best done in comp.

1 Like

Nope, it would be a per-camera comp effect, not hardcoded. Think of it as if it were a nodegroup in the compositor.

2 Likes

So I played around with the compositor a bit more and it's pretty capable already. I made a dirty-lens shader which reacts to overall brightness and blends a dirty/scratched lens plus some bokeh blur, an impact frame shader, and my own bloom shader (it's not looking as good, or as easy to tweak, as the After Effects glow effect though), all working in tandem without losing even a frame in playback.
If we get a nice antialiasing module, an “anime pipeline” will be very achievable.

Thanks for the hard work!

2 Likes

Sorry to make requests, and I didn’t fully read the thread, but here are my thoughts:
Geometry has modifiers
Grease pencil has modifiers and effects
I believe that cameras should have a filter stack, similar to those. And that is where compositing should go. Like, they would have a Filter stack with a bunch of simple filters (Exposure, color correction, blur, etc), or a node-based setup (Compositing nodes) if the user wants to go advanced.
And each camera would have its own unique filter stack. It just makes sense to me.
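
Purely as a hypothetical sketch of that data model (none of this exists in Blender; the names are made up), the per-camera filter stack could look something like this:

from dataclasses import dataclass, field   # hypothetical data model, nothing like this exists in Blender

@dataclass
class CameraFilter:
    type: str                                    # e.g. "EXPOSURE", "COLOR_CORRECTION", "BLUR"
    settings: dict = field(default_factory=dict)
    enabled: bool = True

@dataclass
class CameraFilterStack:
    filters: list = field(default_factory=list)  # simple filters, applied in order
    node_group: str = ""                         # or a compositing node group for advanced setups

hero_cam_stack = CameraFilterStack(filters=[
    CameraFilter("EXPOSURE", {"value": 0.5}),
    CameraFilter("BLUR", {"size": 4}),
])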

11 Likes

Can each camera have its own output resolution as well?

3 Likes

That would be awesome; right now you can only hack your way around it with scripting.
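
For reference, the scripting hack usually looks something like this sketch: store the desired resolution in custom properties on each camera (the "res_x"/"res_y" names here are made up) and swap it in from a frame-change handler.

import bpy  # sketch of the scripting hack; "res_x"/"res_y" are made-up custom properties on each camera's data

def apply_camera_resolution(scene, *args):
    cam = scene.camera
    if cam and "res_x" in cam.data and "res_y" in cam.data:
        scene.render.resolution_x = cam.data["res_x"]
        scene.render.resolution_y = cam.data["res_y"]

bpy.app.handlers.frame_change_pre.append(apply_camera_resolution)

It works, but it is fragile, and it is exactly the kind of thing proper per-camera output settings would make unnecessary.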

1 Like

I think that a lot of settings that are in the Render and Output tabs could be considered to become camera settings instead… like color management, resolution, motion blur (per-camera ISO, shutter speed, aperture size), maybe even frame range (so the frame ranges of different cameras can overlap and output different files of different angles for the same frames), and so on…
Resolution - most definitely.
I was working on an arch viz project some years ago. I had to render the same model from various angles with many cameras. Some renders had to be one size and aspect ratio, others had to be different… I set up each frame to select a different camera, and I could render it all at once overnight with Ctrl+F12 and just go to sleep, but I had to manually input the resolution for each render frame, which really sucked.

An alternative solution would be to animate the resolution, not just camera changes, but resolution animation is disabled.
On “Blender Today”, Pablo said that the ability to animate resolution was disabled because it would corrupt video files. But I think that's a bad solution.
For starters, it's highly recommended not to output a video file directly, but to render each frame one by one as images and connect them later. So why have video output as an option at all? Instead, there could be a simple built-in tool to connect image sequences…
But if that’s not an option, the videos wouldn’t get corrupted if the resolution differences between frames are fixed with black bars.
And yet, a simple warning text for animated resolutions could be enough… The inability to animate resolution is a far bigger issue than the edge case of possibly getting a corrupted video…

7 Likes

Are you talking about something like render region?

# Rough Python sketch (show_user_warning is a placeholder for whatever warning UI Blender would use):
import bpy

scene = bpy.context.scene
is_video = scene.render.image_settings.file_format in {'FFMPEG', 'AVI_JPEG', 'AVI_RAW'}
anim = scene.animation_data
res_animated = bool(anim and anim.action and any(
    fc.data_path in {"render.resolution_x", "render.resolution_y"} for fc in anim.action.fcurves))

if is_video and res_animated:
    show_user_warning("Video output formats are incompatible with animated scene resolution. "
                      "Videos need all frames to be exactly the same size.")

… however… I’ve used Lossless Cut to join together a dozen different video files of wildly different sizes and formats.
VLC on my laptops can play them just fine. It doesn't work in VLC on phones/tablets/Amazon Fire Sticks.

That's more due to desktop VLC's ability to digest almost anything. Videos with variable frame size are definitely non-standard.

1 Like

Is there any info on whether we can import LUTs into the real-time compositor soon and use them for even more real-time artistic decisions?

Thank you!

3 Likes