Real-time Compositor: Feedback and discussion

aha, yes now that makes sense :+1:

Start by adding a declaration for the input in node_composite_displace.cc, then map the node's input to the operation's input in COM_DisplaceNode.cc. After that, follow the existing code in COM_DisplaceOperation.cc and COM_DisplaceSimpleOperation.cc: add the depth input socket in the constructor, retrieve and store the socket reader for the depth input in the init function, and use that reader in the various execution methods, making sure to add any necessary members in COM_DisplaceOperation.h and COM_DisplaceSimpleOperation.h. Just follow the existing code and it should be straightforward.
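For reference, a rough sketch of what those pieces tend to look like, mirroring the pattern the existing Displace inputs already use. The socket name, index, and member name below are placeholders; copy the exact declaration helpers and naming from the surrounding code rather than from this sketch.

```cpp
/* node_composite_displace.cc -- declare the new input alongside the existing
 * ones in the node's declare function ("Depth" is just a placeholder name). */
b.add_input<decl::Float>("Depth").default_value(0.0f);

/* COM_DisplaceOperation.cc -- constructor: add a matching input socket.
 * Its position must line up with the socket mapping in COM_DisplaceNode.cc. */
this->add_input_socket(DataType::Value);

/* COM_DisplaceOperation.cc -- init function: fetch and store the reader for
 * the new input, with `depth_input_reader_` being a hypothetical new
 * SocketReader * member declared in COM_DisplaceOperation.h. */
depth_input_reader_ = this->get_input_socket_reader(4 /* placeholder index */);

/* Execution methods: read from depth_input_reader_ the same way the existing
 * vector/scale readers are read, then use the sampled value. */
```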

2 Likes

Yeah, I can confirm: on a MacBook Pro (M1 Pro), it crashes instantly when I switch to rendered mode/view.

1 Like

Damn, you’re right, it does exactly what I need. I actually tried the Convert Colorspace node before, but the wrong gamma threw me off. Thanks for pointing that out.

Actually, looking back at your original post, I think what you need is this:

It’s a bit strange to conclude that it’s “wrong gamma”; what does “wrong gamma” even mean? I hope one day most people can adopt more precise terminology around transfer functions, encodings, etc.

To most people, gamma means the point between lift and gain, but it’s a catch-all phrase. For example, what you were talking about earlier I would call a display gamma correction curve, with the gamma adjusted to make the image look as expected on the screen. By “wrong gamma” I meant that the gamma hadn’t been automatically adjusted so the image displays as expected in the compositor.
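As a side note, the simplest reading of a display gamma correction curve is just a power function applied before display. A minimal sketch, assuming a plain gamma-2.2 power curve, which only roughly approximates a real display transform such as sRGB (and not at all a full view transform like Filmic or AgX):

```cpp
#include <algorithm>
#include <cmath>

/* Encode a scene-linear value for display with a simple power-law "gamma"
 * curve; gamma = 2.2 is a common approximation of an sRGB display. */
float gamma_encode(float linear, float gamma = 2.2f)
{
  return std::pow(std::max(linear, 0.0f), 1.0f / gamma);
}

/* Inverse: recover an approximately linear value from the encoded one. */
float gamma_decode(float encoded, float gamma = 2.2f)
{
  return std::pow(std::max(encoded, 0.0f), gamma);
}
```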

The method you posted works well with Filmic but not with AgX, and probably not with any other view transform either. But anyway, this is getting a bit off-topic.

It should work if you choose the same transform both in the node and in the color management settings. But yeah, I agree it’s off-topic.

It could be an intriguing idea to move some of the render panel options over to the compositor’s responsibility.
An obvious candidate is the Eevee glare option, which I always thought was a little out of place.
But it could also apply to more general options, like color space, or maybe even some of Eevee’s screen-space effects.
IMHO, the real-time compositor should be a first-class citizen inside Blender.
Maybe every scene could start with a default compositing tree (with color management and such), although I am disregarding performance considerations.

Are there plans to also speed up the standard compositor when it’s not being displayed in the 3D viewport? I noticed the lens effect node is still slow when viewing the result in the image editor or the compositor’s backdrop, for example.

Like a 2D version of the 3D viewport that doesn’t need to re-render the 3D scene; it would just use the already rendered Render Layers node (or a movie/image file) in the compositor.

I can see that using the Convert Colorspace node to do the view transform in the viewport compositor is exciting, because we will finally have post-view-transform grading in the viewport (the Use Curves option in the color management panel operates before the view transform), and I am excited for it for that reason. But I don’t think it is useful to convert to any other random color space, since the view transform at the end expects the data fed to it to be in the working space.

From another angle, I am excited that Cycles is finally going to get real-time glare through the viewport compositor. People have been asking for real-time glare in Cycles with no result, but this is another way to have Cycles glare in the viewport, and that is exciting. I hope the Glare node gets supported soon.

https://developer.blender.org/T99210

Description

The aim of this project is to develop a new GPU accelerated compositor that is both realtime and interactive. As a first step, this new compositor will be used to power the Viewport Compositor, which is a new feature that applies the compositor directly in the viewport. In the long term, the realtime compositor will be unified with the existing compositor, effectively accelerating the existing compositing workflow as well. This task is only concerned about the realtime compositor in the context of the viewport compositor.

4 Likes

We can store an AOV or the screen buffer to use in the next frame.

In the past I used this for FX that propagate (like distortion waves).

For things like ‘Boids’ we could use this too.

(Many cool shaders on Shadertoy used for post-processing use a buffer from the last frame.)

1 Like
stored_screen = stored_screen * 0.9 + current_frame * 0.1

current_frame = current_frame * 0.95 + stored_screen * 0.05

The stored buffer would then be used by the compositor.
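A minimal sketch of that kind of temporal feedback, written as plain C++ over a hypothetical float buffer (not the compositor’s actual API), just to show the exponential-moving-average idea:

```cpp
#include <cstddef>
#include <vector>

/* Hypothetical temporal feedback: keep an accumulation buffer alive across
 * frames, blend it with each new frame (an exponential moving average), and
 * feed a bit of the accumulated history back into the current frame. */
struct FeedbackBuffer {
  std::vector<float> stored; /* Persists between frames. */

  void blend(std::vector<float> &current_frame)
  {
    if (stored.size() != current_frame.size()) {
      stored = current_frame; /* First frame: initialize the history. */
      return;
    }
    for (size_t i = 0; i < current_frame.size(); i++) {
      /* Accumulate history. */
      stored[i] = stored[i] * 0.9f + current_frame[i] * 0.1f;
      /* Feed the history back into the current frame. */
      current_frame[i] = current_frame[i] * 0.95f + stored[i] * 0.05f;
    }
  }
};
```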

I guess that could be technically possible, but it is out of the scope of the project for the moment.

Yes, after a driver update it does indeed work on my ancient (almost 10-year-old) APU, at least in rendered viewport shading mode.
It does not do anything on my end if I use material preview viewport shading mode, although the blog post suggests it should.

That’s great to hear.

If all the nodes are getting rewritten, it’d be a good idea to have a genuinely better Glare node. Meaning not the same node as the current one, just faster, but an actual new node that can produce acceptable-looking glare. The existing one is straight-up unusable for most types of glare expected from production-quality output.

5 Likes

Have you considered writing up your alternative Glare node proposal at RightClickSelect? This project appears to be about porting existing nodes rather than engineering replacement nodes.

I am aware, but seeing how clumsy and convoluted the current node is, I could easily imagine that rewriting it as GPU-accelerated code would be more time-consuming than writing a simpler and at the same time better glare node from scratch. If the current one gets rewritten, it may end up being throwaway work. In fact, I’d be very surprised if the glare node ever ran in real time if the implementation remains the same, just ported to shader code.

We would still need to port the existing Glare node regardless, for backward compatibility.
Adding new modes to the Glare node is planned, as discussed in the UX document, though we will get to this a bit later.