At the moment there’s a real problem with colour management: if you enable Filmic, for example, it seems everything throughout the software is affected.
For example, if you use an Alpha Over node in the compositor to give an image a white background while Filmic is turned on, the white background will show as gray in the resulting image, presumably because Filmic is affecting the alpha channel.
Having a colour management node in the compositor would resolve this completely, as you could plug it into the beauty pass only and keep the alpha etc. as God intended (untouched by colour management).
Please download the .blend file where I’ve set up the compositor to show the issue:
You’ll notice that if a solid white background is added in the compositor, it appears gray. The file that is output also appears gray. So colour management does unfortunately affect all nodes, not only in the viewport but also in the resulting file.
I think you’re right about the alpha, though, as the PNG that gets output without a background has the correct alpha. So the issue is that colour management affects all nodes in the compositor, in this case turning the white RGB node gray.
My suggestion is that the user should be able to control which nodes are affected by colour management, for proper compositing. In this case, for example, I would put a colour management node before the Alpha Over, so that only the render was affected and not the RGB node.
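The difference between the two node orderings can be sketched with toy numbers. This is not Blender code, and `filmic_like` below is a simple Reinhard-style placeholder curve, not Blender’s actual Filmic transform; it is only meant to show why the background goes gray under a global view transform but survives under a per-node one.

```python
# Toy numeric sketch of the proposal (not Blender code). filmic_like is a
# Reinhard-style placeholder, NOT Blender's actual Filmic transform.

def filmic_like(v):
    # Compresses scene-referred values toward the display range.
    return v / (1.0 + v)

def alpha_over(fg, fg_alpha, bg):
    # Straight alpha-over on a single channel.
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

render = 1.0     # scene-referred render pixel
alpha = 0.0      # a pixel fully outside the render: background only
white_bg = 1.0   # pure white from the RGB node

# Proposed: a transform node on the render branch only.
per_node = alpha_over(filmic_like(render), alpha, white_bg)
print(per_node)   # 1.0 -- the white background survives untouched

# Current: the view transform hits the whole composite.
global_vt = filmic_like(alpha_over(render, alpha, white_bg))
print(global_vt)  # 0.5 -- the white background is darkened to gray
```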
I’m not sure if you’re familiar with other render engines such as Redshift or FStorm, but it would also be great if the colour management node could accept LUTs and offer real-time OpenGL-based physical camera properties (this gives better control and makes it easier to match client renders to previously provided renders when switching to Blender). Here’s a link to Redshift’s implementation:
I don’t think an include/exclude-from-colour-management checkbox on each node would be the best solution, because sometimes you might want different colour management applied to different nodes: if you’re overlaying a 3D render over a photograph and you want to make amendments to both separately, for example.
Use the middle mouse button to sample the image. On the left of the Image Viewer are the scene-referred values; on the right, the display-referred values.
The emission data isn’t transformed until the very end at the display.
Again, your understanding of emissions is wrong. You are passing a scene-referred value of 1.0 and expecting peak display-referred output. 1.0 doesn’t mean anything special in a scene-referred emission environment.
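To make that concrete, here is a toy scene-to-display curve (a Reinhard-style placeholder, not Blender’s actual Filmic): scene-referred 1.0 lands nowhere near display peak, and only ever-larger emissions approach it.

```python
# Placeholder scene-to-display curve (Reinhard-style), NOT Blender's Filmic.
# The shape is what matters: output approaches display peak only asymptotically.

def to_display(scene):
    return scene / (1.0 + scene)

for scene in (0.18, 1.0, 4.0, 16.0):
    print(f"scene {scene:>5} -> display {to_display(scene):.3f}")
# Scene-referred 1.0 maps to display 0.500, not to display peak 1.0.
```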
This is already built into the sophisticated OpenColorIO chain. Applications that do not support OpenColorIO such as the ones listed are far behind, not the other way around.
You don’t want a dynamic camera rendering transform, for a plethora of reasons. Further, a camera rendering transform extends well past transfer functions (“tone mapping” is a poor term) and into plenty of other transforms on the image data.
That isn’t to say the compositor should not get a transform node; it is absolutely mandatory. Again, though, what you are citing isn’t about colour management per se, but about more fundamental model differentiation and what code values represent.
It’s an awful idea actually, as absolutely every single code value needs to be managed.
Yep, I fully understand that. But my point is that I don’t want the suggested “colour management node” to affect the screen; I want it to affect the actual pixel information of only the output of the node it’s connected to in the compositor, leaving the pixel information of unconnected nodes untouched.
This would mean I could keep the global view transform (for the screen) set to Standard, so it isn’t applied to all pixel information in the saved image, and instead use one or more colour management nodes to affect only specific compositor nodes.
Perhaps “colour management node” was not the best choice of name, as I think it’s led to a bit of confusion.
If I haven’t described the need very well, check the file I linked to above, and look at the JPG output, which has a gray background instead of white because Filmic is applied to it as a whole. I know I could stay in Standard mode and try to match Filmic with the existing colour correction nodes, but what I was looking for was more like the controls found in the other engines I linked to, with real-time tweakability (currently tweaking values requires quite a long refresh, and matching Filmic isn’t as fast as just plugging in a node and selecting Filmic or a LUT).
This defies pixel management principles as described, but I believe what you are experiencing is related.
For starters, it is fundamentally impossible to mix a display-referred emission buffer with a scene-referred emission buffer; one relates to the minimum-to-maximum emission within the constraints of a display, while the other exists as a physically plausible light transport system.
So the idea of “I want to select peak sRGB display-referred white and use it in Filmic as peak display-referred white” is fundamentally irreconcilable.
What is required is a fully pixel-managed pipeline along the entire chain. That is, when picking a display-referred output value, you’d want to be picking from the Filmic encoding with whatever contrast. That’s a bit of complexity around transfer functions.
Beyond that, both colour and data encodings vary, and they too must be transformed. In some cases, such as alpha, the visualization of the data must be isolated from the actual encoding.
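One way to see why “peak display white” has no stable meaning once it enters a scene-referred chain: invert a toy display curve back into the scene domain. The curves below are Reinhard-style placeholders, not Blender’s actual transforms; the point is the behaviour, not the numbers.

```python
# Toy illustration of why display-referred white cannot simply be carried
# into a scene-referred chain. Placeholder curves, NOT Blender's actual ones.

def scene_to_display(v):
    return v / (1.0 + v)

def display_to_scene(v):
    # Inverse of the placeholder scene->display curve v / (1 + v).
    return v / (1.0 - v) if v < 1.0 else float("inf")

# A display-referred "pure white" pixel pulled back into the scene domain
# would need an infinite emission to re-render as peak white:
print(display_to_scene(1.0))   # inf

# While a naive scene-referred 1.0 re-renders at only half of display peak:
print(scene_to_display(1.0))   # 0.5
```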
See above. This is unrelated; the two encodings are radically different. You wouldn’t want to take both to the display-referred domain and composite there either, as there are adverse effects.
The colour transform node, however, is still a mandatory requirement; it just won’t help with your issues here.
Try to forget display- and screen-referred colour management for a moment. I’m talking about altering the value of the stored pixel prior to colour management. So if I have a pure white value on one pixel, I want to change the actual value of that pixel before colour management is even considered… but only at the output point of specific nodes.
What I’m describing is no different from using a colour correction node; I just want the extra options previously stated, so that I can mimic Filmic for individual nodes.
I think it would be great to have a way to bypass Color Management’s influence for individual image nodes in the Compositor, just as Color Management is already bypassed for any Background Images used in the Camera settings. Occasionally you may want to use Blender to composite your scene-referred render (using Filmic) with an already-graded, display-referred background image. This is currently impossible to do. One must first export the render and then composite in another application (or re-import into Blender using the “Standard” view transform).
A more practical use for me is when I am rendering out a “beauty pass” (in Standard or Filmic) but also need a UV pass (which works for me as Raw). I am pretty sure the data to deliver this exists in a single render, but I can’t get at it, as I’m locked into one view transform or another for everything running through the Compositor.