Compositor UI improvements

Just so you know, the file output format can be changed per File Output node.

The setting can be found in the Node properties in the sidebar (opened by pressing N) while a File Output node is active.


Ah, well then, TIL you can modify Blender’s File Output nodes on an individual basis! I’ll give it a whirl and see if there are any challenges to it, but that seems to be the missing link. I was unaware that they had been decoupled from Blender’s main render settings.

I would assume it is smart enough that if the compositor does not import any render layers, it does not waste time running the renderer. Is this correct?
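That check can be sketched as scanning the node graph for render-layer inputs before invoking the renderer. A minimal sketch; the node types and the `render`/`composite` callables here are hypothetical placeholders, not Blender's actual API:

```python
# Sketch: skip rendering when the compositing graph pulls in no render layers.
# Node dicts and the render/composite callables are illustrative placeholders.

def references_render_layers(nodes):
    """Return True if any node in the graph is a Render Layers input."""
    return any(node["type"] == "RENDER_LAYERS" for node in nodes)

def execute(nodes, render, composite):
    """Only invoke the renderer when the graph actually needs rendered pixels."""
    rendered = render() if references_render_layers(nodes) else None
    return composite(rendered)

# Example: a graph built purely from loaded images never triggers a render.
calls = []
graph = [{"type": "IMAGE"}, {"type": "OUTPUT_FILE"}]
execute(graph, render=lambda: calls.append("render"), composite=lambda r: r)
# `calls` stays empty: the renderer was never run.
```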

IMHO, a solution for the multiple-files case would be for the file-write node to have an output that carries the same image as its input, with writing the file as a side effect of executing the node. In your example, you would feed one file write into the other, then execute the last one.
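That chaining idea can be sketched with a toy evaluator, where a write node forwards its input image unchanged and saving is a side effect of evaluation. All names here are illustrative, not the compositor's real API:

```python
# Toy sketch of a pass-through file-write node: evaluating it saves the image
# as a side effect and forwards the same pixels, so writes can be chained.
written = {}  # stands in for the filesystem

def write_node(path, upstream):
    """Return a node (a callable) that writes upstream's image, then forwards it."""
    def evaluate():
        image = upstream()
        written[path] = image  # side effect: "save" the file
        return image           # pass-through output
    return evaluate

source = lambda: [0.1, 0.5, 0.9]      # some upstream image
first = write_node("a.exr", source)   # first file write
last = write_node("b.exr", first)     # second write, fed by the first
result = last()                       # executing the last node writes both files
```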

We are planning to tackle I/O issues in the compositor after the two larger topics we are working on right now (see roadmap). This includes a coupling between the compositor and the render pipeline, though not exactly in the way you describe; see for example:


Indeed, if there is a compositor in the render pipeline, only scenes referenced by the compositor will be rendered. If none are referenced, none are rendered.

I am not sure if this fits in this thread, but would it be possible to have exposure and gamma sliders in the image editor? Just a display thing; it can be useful to quickly check the values of the pixels on a depth pass, for example, or overexposed areas of an image.


In principle I think it makes sense to have image transformations for viewing only, but I wonder why you would want exposure and gamma rather than, for example, a regular threshold to see overexposed areas?

Ah yes, it could be a threshold; it is just that I am used to having an exposure slider in other DCCs. I would say the advantage of exposure and gamma sliders is that they also allow for non-destructive testing. But either way sounds good to me.


I would also love to have this. It’s very useful to see how much range you have in your image, kind of like what you can do in the viewport by tweaking the exposure and gamma in the color management settings. :slight_smile:
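For reference, the usual display-only transform behind such sliders scales linear pixel values by 2^exposure and then applies a 1/gamma power curve. A minimal sketch of that convention (as commonly used in DCCs, not Blender's exact implementation); the pixel data itself is never modified:

```python
# Display-only exposure/gamma: maps a linear value for viewing, leaving
# the underlying pixel data untouched (non-destructive by construction).
def view_transform(value, exposure=0.0, gamma=1.0):
    """Scale a linear pixel value by 2**exposure, then apply a 1/gamma curve."""
    scaled = value * (2.0 ** exposure)
    return scaled ** (1.0 / gamma) if scaled > 0.0 else scaled

# Example: +1 stop doubles a quarter-grey value before the gamma curve.
print(view_transform(0.25, exposure=1.0))             # 0.5
print(view_transform(0.25, exposure=1.0, gamma=2.0))  # ~0.707
```

Because the transform only runs at display time, dragging the sliders back to 0 exposure / 1.0 gamma restores the original view exactly.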


I see, yeah that all makes sense, thank you!

On the original topic – I would say that any high-performing compositing application would want to re-process pixels as little as possible, so having multiple outputs would ideally mean that the pixels are processed once into RAM and then compressed/resized/saved to the various outputs from there. The same logic should apply to operations that can be concatenated, so that destructive operations (the pixel filtering in a resize, for example) can be combined and run once. I don’t know whether the Blender compositor already does this kind of optimization, but it’s something that gave Shake and Nuke a clear advantage over other apps in maintaining performance and quality.
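The process-once idea can be sketched as caching the composited result and deriving each requested file from that cache, so only the cheap per-output work (encode/resize) repeats. Everything here is illustrative, not Blender internals; a counter stands in for profiling the heavy work:

```python
# Sketch: expensive compositing runs once; each output only re-encodes the
# cached pixels. The counter verifies the heavy work is not repeated.
composite_runs = 0

def composite():
    """Stand-in for the expensive per-pixel compositing work."""
    global composite_runs
    composite_runs += 1
    return [0.2, 0.4, 0.6]  # the composited pixels

def save_outputs(outputs):
    """Composite once, then derive every requested file from the cached pixels."""
    cached = composite()  # heavy work happens exactly once
    return {name: encode(cached) for name, encode in outputs.items()}

files = save_outputs({
    "full.exr": lambda px: px,       # full-resolution copy
    "half.jpg": lambda px: px[::2],  # crude stand-in for a downscale
})
```

With naive per-output evaluation, `composite()` would run once per file; here it runs once regardless of how many outputs are attached.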