Real-time Compositor: Feedback and discussion

Yes, I thought about that too, and it would technically work, but I’m not a fan at all. In a complex tree (which is exactly the kind of tree that would benefit most from that mapping behaviour) I think it would get very messy very fast, because the viewer nodes would scatter around and it would be really hard to scan. If they were much more obvious, maybe? But I’m skeptical even then.

When working in the geometry node editor, when there’s a somewhat complex tree and I have to go looking for the viewer node… Oh man, it is a pain, because they don’t stand out at all. I would not want to have to look for more than one of those. They would have to be neon colored for me to be even somewhat ok with it.

To be fair though, the Ctrl+Shift+Click workflow implies that you shouldn’t really be bothered by where the viewer node is… But I still think they could create a lot of visual clutter, and you do want to see at a glance which node is connected to the active viewer, which the current UI design colors make hard.

I’ve seen that passes are planned; is there any news on this? It would be game-changing for various workflows.

6 Likes

In the future, we will have that option for all nodes. We will allow you to choose which input determines the size of the node, or it could be auto like we have now, or the render resolution. So this design will probably be implemented regardless of what we decide.

No news on that for the moment unfortunately, but we are close to reaching some of the other milestones, so hopefully we will pick that up next.

7 Likes

Do we need the Mist pass to be able to achieve a distance fog effect? Or are there any tricks with the currently available nodes?

I am not sure how we compute the mist pass, but you can approximate it by normalizing the depth pass, remapping it, and raising it to some power.
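The normalize → remap → power recipe above can be sketched in a few lines of NumPy. This is only an approximation under assumed parameters: the `near`, `far`, and `falloff` values below are illustrative, not Blender settings, and the real Mist pass may use a different falloff curve.

```python
import numpy as np

def mist_from_depth(depth, near=0.1, far=50.0, falloff=2.0):
    """Map raw depth values to a 0..1 mist/fog factor.

    near/far define the remap range; falloff shapes the curve
    (higher values keep the foreground clearer for longer).
    """
    # Normalize and remap depth into [0, 1] over the chosen range.
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)
    # Raise to a power to shape the falloff.
    return t ** falloff

# Example: a few depth samples from near to beyond the fog range.
depth = np.array([0.1, 10.0, 25.0, 50.0, 100.0])
print(mist_from_depth(depth))
```

The resulting factor can then drive a Mix node between the image and a fog color, which is roughly what a Map Range + Math (Power) node chain would do in the compositor.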

I’m just thinking it would be cool to have like a constant distance fog effect in the viewport (similar to video games), without having to set up volumetric lighting. I think it would make it more intuitive working with environment design, because you get a better sense of depth from the gradation of values going from dark in the foreground to lighter in the background.

But it’s currently not possible to do it with viewport compositor, I guess?

It is currently not possible to have the mist pass in the viewport compositor. However, you can approximate it using the depth pass as I mentioned above, which should work in the viewport compositor, but only with EEVEE.

1 Like

Ahh I was just confused because it’s also called Z pass :smiley: Thanks!

Are you sure? Works also with Cycles like a charm :smiley:

Well, that is news to me …
Okay, maybe with Cycles as well, but ONLY when Overlays are enabled. :smiley:
Though if you disable overlays, the last visible depth buffer will be returned and will not update as you rotate the camera.

1 Like

Several of the pass names in the compositor don’t match the names in the Properties tab, so some confusion is understandable.

Interesting discrepancy between what the viewport shows and what the actual compositor calculates. It seems like that Multiply node is the culprit.

I am feeding the normalized depth AOV to the Multiply node.

Can you share the file?

1 Like

I can’t share my scene but I will try to get you a scene to test.

Btw, I am getting constant crashes with the file when I combine all the scenes in a sequencer scene. Does this look like a crash in the compositor? It always crashes when rendering the last frame (frame 6). All the scenes have compositing enabled. I will see if I can prepare a simpler scene for this too.


Stack trace:
blender.exe         :0x00007FF656E13B70  blender::gpu::Texture::update
blender.exe         :0x00007FF655514D90  blender::nodes::node_composite_render_layer_cc::RenderLayerOperation::execute_pass
blender.exe         :0x00007FF655514190  blender::nodes::node_composite_render_layer_cc::RenderLayerOperation::execute
blender.exe         :0x00007FF655431110  blender::realtime_compositor::Operation::evaluate
blender.exe         :0x00007FF655424400  blender::realtime_compositor::Evaluator::compile_and_evaluate
blender.exe         :0x00007FF6559D7270  blender::render::RealtimeCompositor::execute
blender.exe         :0x00007FF6559D6FC0  Render::compositor_execute
blender.exe         :0x00007FF655316990  COM_execute
blender.exe         :0x00007FF6559CAFB0  do_render_compositor
blender.exe         :0x00007FF6559CB760  do_render_full_pipeline
blender.exe         :0x00007FF6559CEB80  RE_RenderFrame
blender.exe         :0x00007FF655312A50  seq_render_scene_strip
blender.exe         :0x00007FF655310680  do_render_strip_uncached
blender.exe         :0x00007FF655313040  seq_render_strip
blender.exe         :0x00007FF655313200  seq_render_strip_stack
blender.exe         :0x00007FF65530FBA0  SEQ_render_give_ibuf
blender.exe         :0x00007FF6559CB980  do_render_sequencer
blender.exe         :0x00007FF6559CB760  do_render_full_pipeline
blender.exe         :0x00007FF6559CE2C0  RE_RenderAnim
blender.exe         :0x00007FF65583C420  render_startjob
blender.exe         :0x00007FF654DBC7B0  do_job_thread
blender.exe         :0x00007FF654DE9190  _ptw32_threadStart
ucrtbase.dll        :0x00007FFCCD466BB0  recalloc
KERNEL32.DLL        :0x00007FFCCE5D53D0  BaseThreadInitThunk
ntdll.dll           :0x00007FFCCFAE4830  RtlUserThreadStart


Please see the difference:

.blend
_18052024_2020_33.zip

The Depth pass is not expected to be identical between Viewport and Final Render. It will be once we add passes support, but for now, it is a temporary approximation.

3 Likes

For future updates of the glare node, I remembered about this video.

I assume that the FFT Convolution Bloom approach talked about near the end is the one that the glare node uses, and the OP talks about how you can use a custom kernel shape for the bloom. Something to look into in the future?

3 Likes

Yes, this is what the Fog Glow glare mode uses, and yes, we shall replace it with a more physically accurate kernel in the future.
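For anyone curious how FFT convolution bloom works in principle: via the convolution theorem, convolving a bright-pass image with a large glare kernel becomes a cheap per-frequency multiply. A minimal NumPy sketch, assuming an illustrative uniform kernel and threshold (Blender’s actual Fog Glow kernel and bright-pass logic differ):

```python
import numpy as np

def fft_bloom(image, kernel, threshold=0.8):
    """Add a glow to `image` by FFT-convolving its bright pixels with `kernel`."""
    # Bright pass: keep only pixels above the threshold.
    bright = np.where(image > threshold, image, 0.0)
    # Pad the kernel to image size and roll it so its center sits at the
    # origin, keeping the glare aligned with its source pixels.
    k = np.zeros_like(image)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Convolution theorem: multiply the spectra, then invert.
    glow = np.real(np.fft.ifft2(np.fft.fft2(bright) * np.fft.fft2(k)))
    return image + glow
```

The appeal of the approach mentioned in the video is that the kernel can be any image (e.g. a star-shaped flare), since the frequency-domain multiply costs the same regardless of kernel shape.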

5 Likes

Hi, is it possible in any way to have the compositor’s inputs scale and translate along with the camera view/render region?

Here I tracked footage and I’d like to zoom in to check that everything aligns, color correct, pick values, etc.
Similar behavior to what we have when using the background image in the camera.

thank you very much

This is a known issue at the moment. In order to fix it, we need to introduce the concept of data and view windows, which is hopefully in our short-term development plan.