Blender Support for ACES (Academy Color Encoding System)

I’ll respond just to this, and ignore all the personal attacks and name-calling.
Too much mystery here. Either it works or it doesn’t.
Tell me exactly what I should select in the Color Transform. It doesn’t have that many options.
I bypass the input transform in the color management. Then in the OFX node I have:
Input Color Space: ________
and Input Gamma: ________
The other options are generally a matter of taste, but if you have recommendations, feel free to make them.

And you said the TIFFs are the way to go. What is the input transform for TIFFs then? Because TIFFs are no longer linear; they get affected by the transform.

It shouldn’t be that difficult to just outline a working workflow. Without name-calling or insults, if possible.
And, lastly, if ACES is supported, how do I export a linear ACEScg file?


The OFX node allows you to take a linear EXR, or any of Resolve’s “supported” encodings, and change it as required. So if your footage is Arri LogC / AWG, the “Input Colour Space” would be AWG, and the poorly termed “Gamma” would be Arri LogC.

But again, that will only manually load the footage correctly, and you would be responsible for taking the footage to the “Timeline Colour Space”. Resolve only works on display linear at best, and that is based on a BT.709 assumption.

You can easily test and verify this by testing a comp of a blurred red shape over a fully emissive cyan background. As of last testing, only BT.709 as a working timeline space linearizes to display linear for compositing.
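If anyone wants to see the idea in numbers rather than in a viewer, here is a tiny numpy sketch of the same test; the 2.4 exponent and the sample values are purely illustrative assumptions, not a claim about what Resolve does internally:

```python
import numpy as np

# Fully emissive cyan background, pure red foreground (scene-linear values).
cyan = np.array([0.0, 1.0, 1.0])
red = np.array([1.0, 0.0, 0.0])

# A coverage ramp standing in for the blurred edge of the red shape.
alpha = np.linspace(0.0, 1.0, 5)[:, None]

# "Over" performed on scene-linear light: the physically plausible result.
linear_mix = alpha * red + (1.0 - alpha) * cyan

# The same "over" performed on display-encoded values (a 2.4 power is assumed
# here), which is what a nonlinear working space silently gives you.
encode = lambda x: x ** (1.0 / 2.4)
decode = lambda x: x ** 2.4
nonlinear_mix = decode(alpha * encode(red) + (1.0 - alpha) * encode(cyan))

print(np.round(linear_mix, 3))
print(np.round(nonlinear_mix, 3))
# The nonlinear blend dips far darker through the transition; that darkened
# fringe around the blurred edge is the telltale of non-linear compositing.
```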

Fusion standalone is far easier here. But again, they are two different applications with different assumptions.

Filmic Log TIFFs are indeed nonlinearly encoded, but that is fine, because Resolve doesn’t exactly work in linear; it’s just part of its history. If the goal is to composite footage, Fusion standalone would be the way to go, and you can simply linearize Filmic Log back to linear using the inverse transform and an OCIO node.

I can only post and try to explain things so many times. At some point either someone says “I still don’t understand” and I keep trying to figure out what that is, or they hand wave and make appeals to authority. I am, as I’ve said and demonstrated countless times, happy to try and help the former case.

If you install the canonized ACES configuration, with the appropriate Blender tweaks, the output of an EXR would be linearized AP1 primaries. That is manageable from within any OCIO enabled compositor etc. And of course you would be left with what amounts to an unmanaged output because plenty of your pixels would be out of gamut. And then on top of that, you also end up with posterized messes due to the overall design.

If the work is Filmic based, because Blender doesn’t have fully managed file encoding, the sole option would be Filmic Log in a TIFF or like format, and then linearize the 16 bit TIFF in the compositor using the inverse transform via OCIO.

The latter works and has worked for over half a decade. The contrasts are easily applied downstream on the Filmic Log.
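If you would rather do that inversion in a script than with an OCIO node, here is a rough, untested sketch using the OCIO v2 Python bindings and imageio; the colour space names (“Filmic Log”, “Linear”) are the ones in Blender’s bundled configuration of that era, and the file paths are placeholders to adjust for your install:

```python
import numpy as np
import imageio.v3 as iio
import PyOpenColorIO as ocio

# Placeholder path: point this at Blender's bundled OCIO configuration.
config = ocio.Config.CreateFromFile("/path/to/blender/colormanagement/config.ocio")

# Build the inverse transform: Filmic Log back to the scene-linear reference.
processor = config.getProcessor("Filmic Log", "Linear")
cpu = processor.getDefaultCPUProcessor()

# 16 bit TIFF straight out of Blender, normalized to 0..1 floats.
img = iio.imread("render_filmic_log.tif")
rgb = np.ascontiguousarray(img[..., :3], dtype=np.float32) / 65535.0

# applyRGB operates in place on a packed float32 RGB buffer.
cpu.applyRGB(rgb.reshape(-1))

# rgb now holds scene-linear values, ready to be handed to the comp.
print(rgb.min(), rgb.max())
```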

Ok, we may have the source of the misunderstandings here. It looks like you have a very old picture of Resolve and its color science. The latest versions of Resolve work internally in another colorspace that’s not display related, nor based on 709 primaries. It’s called DaVinci Wide Gamut / DaVinci Intermediate, for the colorspace and the “Transfer function” (to avoid calling it gamma and making you angrier…) respectively. It’s Blackmagic’s attempt to respond to ACES with their own color-managed environment, and, of course, they claim it’s far better than anything humanity has seen so far in terms of CM. I lack the knowledge to judge or back that claim; I just know whether I am able to use it effectively in production or not, and I still couldn’t. All standard camera footage gets mapped very nicely, but I ran into all sorts of issues with digital 3D footage, Fusion comps (although I recently arrived at a working solution to map the Fusion comps) and, the worst, effects, plugins and powernodes unaware of the new workflow.
There’s a small brochure outlining it here (I bet there’s more in-depth data someplace else):

I assume that you will not like the DWG/DI… just by linear extrapolation of what you think of all color management, but what I want to do is get the EXRs there. Or to ACES, but maybe that’s not possible, from what you say.

And you tested the compositing?

Give it a try with the red blurred square and the cyan background. You’ll find it has a very peculiar behaviour in 17, and is very non-uniform in response.

As best as the folks I know who have tested it can tell, it is a very convoluted backend that doesn’t behave consistently, or at the very least is far from straightforward.

But again, feel free to try it with imagery that demonstrates linear versus nonlinear compositing, as that’s the best method.

I still stand by the view that it is far easier to control, and to have access to OpenColorIO nodes, with Fusion Standalone.

btw, I installed this version of ACES, using the content of the 1.1 folder.

I don’t know if, as of today, that’s the version you approve of. I guess it’s the one people (meatheads?) out there are using.

You mean something like this? (That’s two Solids, precomposed and blurred on the Color page.)
In a Davinci Wide Gamut project:


The same test in an unmanaged YRGB, Rec709/2.4, old-school/old-science Resolve project:


I have to say that to my meathead, untrained, gamma-corrected, unworthy and flat-earther eyes, the Wide Gamut version looks way better, both in the preview and in the scopes. Wrong answer? Does this give some clue as to how to place linear EXRs the right way?

Jason (JTheNinja) had a variation with the appropriate Blender tweaks, including the appropriate RGB to XYZ matrix and proper coefficients. I can’t remember where it is.

The DWG version I believe composites in display linear correctly, like their older BT.709 YRGB chain did.

I believe though that not all operations flip flop. Again, it’s erratic. I also don’t believe that their pseudo ACES chain works correctly, but that was tested in the early betas of 17.

I would test everything against a ground truth like Nuke or Fusion before assuming it works properly. It was inconsistent in the betas, operation depending.

Also note that Resolve is per-channel, so if one wants to hold the chromaticities, the versus tools are the most useful options in Resolve. Otherwise it’s skew city. Baselight, on the other hand, doesn’t operate in the RGB stimulus domain, and is far more properly managed.
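To make the skew concrete, here is a small numpy illustration of a per-channel curve versus the same curve applied through a norm while the RGB ratios are held; the 0.6 exponent is arbitrary, and this is only a stand-in for the idea, not a description of any particular Resolve tool:

```python
import numpy as np

rgb = np.array([0.9, 0.4, 0.1])   # a warm scene-linear value
curve = lambda x: x ** 0.6        # stand-in for any grading curve

# Per channel: each channel goes through the curve independently.
per_channel = curve(rgb)

# Ratio preserving: run a norm (the max here) through the curve, then scale
# the whole triplet so the RGB ratios, i.e. the chromaticity, are held.
norm = rgb.max()
ratio_preserving = rgb * (curve(norm) / norm)

print(np.round(per_channel / per_channel.max(), 3))            # ratios have shifted
print(np.round(ratio_preserving / ratio_preserving.max(), 3))  # ratios unchanged
```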

Thank you, I’ll search for it. The one I have installed definitely does not export linearized AP1 EXRs.

They stated last year that they “Fixed incorrect matrix values”. It may be working better by now.

It should. Assuming you feed it AP1 lights, it will spit out AP1 light buffers. But again, gamut problems and tonality mishaps abound.

I am pretty sure it is due to their internal pipeline, which is being retrofitted, hence there are plenty of kinks and warps and weirdness. Hopefully BM won’t discontinue Fusion standalone, as ResolveFusion just isn’t anywhere near usable yet.


One image is out of Blender with the official ACES config, set to ACES and 709 for the look, and the other is out of Resolve. The Resolve one has no adjustments to saturation or exposure.

Resolve color management was set to ACEScc with AP1, and the output set to 709.

The EXR input transform was set to ACEScg.

Choosing ACEScg as the input transform seems to give me an exact match (no OFX, no bypass).

Using the Color Transform OFX with AP1 and linear gives me a great match too (under Advanced, only turn on white adaptation), but not exact.

Just tinkering with this because dealing with all the problems helps me figure stuff out.

Edit: using an uncolormanaged workflow and just using the OFX node to handle conversions, I pretty much got the results I wanted by setting the tone mapping option from DaVinci to luminance (I need to test different images). Of course, all of this is with an image rendered with the ACES config.

Did you have any luck finding it? I did find the tweets, but not the actual config. I’m still trying to look for it; I’m not even sure whether Jason ever actually posted his config.


There’s no “winning”; a log encoding is an encoding that is not suitable for display. It’s a light encoding, which means it must be appropriately decoded and prepared for display.

Imagine looking at the bitstream of an MP3 using a text editor, and saying that some song “wins”. That’s what you have done here.

In terms of “correct” outputs, using the canonized rendering, there’s another rabbit hole there. See the trending to cyan outside the window? That’s not “winning”.


@troy_s He’s not using a log encoding, he’s showing a render of his scene (which is intended to be displayed).

However, for direct rendering ACEScc is wrong; he should be using ACEScg instead.
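For anyone wondering why that matters: ACEScg is simply linear AP1, while ACEScc wraps the same primaries in a piecewise log2 curve intended for grading. A small sketch of the encoding as I read the ACEScc specification (S-2014-003); treat it as illustrative rather than authoritative:

```python
import numpy as np

def acescc_encode(lin):
    """ACEScc forward encoding: a piecewise log2 curve over linear AP1 values."""
    lin = np.asarray(lin, dtype=np.float64)
    return np.where(
        lin <= 0.0,
        (np.log2(2.0 ** -16) + 9.72) / 17.52,
        np.where(
            lin < 2.0 ** -15,
            (np.log2(2.0 ** -16 + lin * 0.5) + 9.72) / 17.52,
            (np.log2(np.maximum(lin, 1e-38)) + 9.72) / 17.52,
        ),
    )

# ACEScg applies no curve at all, which is why it is the space to render and
# composite in, while ACEScc is a grading-oriented log encoding.
for value in [0.0, 0.18, 1.0, 16.0]:
    print(value, "->", round(float(acescc_encode(value)), 4))
```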

@nickonimus can you try the filmic version with a medium-high contrast or a high contrast look please?

Nope :frowning: I searched here on devtalk, I searched GitHub, and even Blender branches, and couldn’t find anything.
But I was afraid to ask Troy again, and that suddenly the pixels in my monitor would turn into a digital fist that could actually punch me in the face. The guy understands these colorspace thingies in a way that maybe he can command all the monitors in the world at will, in ways we cannot imagine…

Now, seriously, if there’s an ACES profile that’s, to some extent, “TroyApproved”, it would be great to have it. I was able to bring most of my camera footage to ACES successfully, so it would be great to have Blender EXRs too.
I still think that officially-supported-dropdown-accessible-no-web-hack-hunting ACES support would be great.
Maybe we can DM Jason or tag him here to at least get his version?

The problem is that the default ACES configuration is a sloppy shitfight mess, beyond being a smouldering dumpster fire of image mangling.

From the Blender side, the luminance coefficients and the RGB to XYZ role need to be defined correctly. There was a configuration out there that did this, but to be honest, ACES is so poorly designed that it doesn’t matter; no one will likely see or know the difference because it mangles everything up so badly.
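For reference, the numbers in question are not mysterious; they fall straight out of the documented ACEScg (AP1, D60) RGB to XYZ matrix. A small numpy sketch; how a given configuration expects the role and coefficients to be declared is a separate, config-specific matter:

```python
import numpy as np

# ACEScg / AP1 RGB to CIE XYZ (D60 white), per the ACES documentation.
AP1_TO_XYZ = np.array([
    [ 0.6624541811,  0.1340042065,  0.1561876870],
    [ 0.2722287168,  0.6740817658,  0.0536895174],
    [-0.0055746495,  0.0040607335,  1.0103391003],
])

# The luminance (Y) weights a config should advertise for AP1 are simply the
# middle row of that matrix.
luma_ap1 = AP1_TO_XYZ[1]
print(luma_ap1)        # ~[0.2722, 0.6741, 0.0537]
print(luma_ap1.sum())  # ~1.0

# Sanity check: equal-energy AP1 white lands on the D60 white point.
print(AP1_TO_XYZ @ np.ones(3))
```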

So short term, just use whatever ACES official configuration you find. It just doesn’t matter.


Thank you, Troy.
And just to know: is the spectral rendering branch a step in the right direction in that sense?
I mean, computing the render as ONE wave that gets decomposed into RGB after the render, just like in a camera sensor, sounds more accurate to me (in my ignorance). Maybe it opens the door to better color management in a broad sense? Or at least a more accurate rendition of colors based on the light?

Something fitting here I believe

Spectral rendering is clearly the future.

However, getting light data out and into an image is an even larger problem there, one that has yet to be “solved” with even BT.709 / sRGB based lights! It’s an open field seeking solutions. Nothing has “solved” this yet, with most approaches stuck in the bog of digital RGB gaudy looking output.

This is a fantastic observation that leads to a subsequent one; digital cameras are vastly different to what we had with film cameras. Digital cameras capture light data. Film cameras formed imagery.

That might seem like a foolish semantic twist at first, but once you dip your toe into the background, it’s sort of a mind ripping observation.

Light data is just what it says… it’s just boring emission levels. A digital sensor captures spectral light, and then some crappy math is applied to modify the levels so that they create a math-bogus-fit set of lights that sort of kind of almost will generate a stimulus of the light it actually captured.
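To put a toy example behind that “crappy math”: the fit is often nothing more exotic than a single 3×3 matrix solved in the least-squares sense between the sensor’s channel responses and the observer’s. The spectral curves below are invented purely to show the shape of the problem:

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)
gauss = lambda mu, sigma: np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Invented camera channel sensitivities (columns: R, G, B).
camera = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)], axis=1)

# Invented stand-ins for the observer's colour matching functions (columns: X, Y, Z).
observer = np.stack([gauss(595, 45) + 0.3 * gauss(445, 25),
                     gauss(555, 45),
                     gauss(450, 30)], axis=1)

# A pile of random test spectra hitting the sensor.
spectra = np.random.rand(200, len(wavelengths))

cam_rgb = spectra @ camera    # what the sensor records
target = spectra @ observer   # the stimulus we would like to reconstruct

# The "crappy math": one 3x3 matrix, fitted in the least-squares sense.
M, *_ = np.linalg.lstsq(cam_rgb, target, rcond=None)

# It only "sort of kind of almost" works; the residual is never zero unless the
# camera responses are an exact linear combination of the observer's.
print(np.abs(cam_rgb @ M - target).max())
```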

But in the end, it’s just dull light data. It’s worse than a render using RGB light!

The problem is still turning that light data into an image. And again, this is where literally all digital approaches absolutely suck, and they are virtually all similar to identical.

Creative negative print film, on the other hand, took spectral light data and transformed it into a fully fledged image. Spectral light would degrade the dye layers along the entire range of the medium, leading to tremendous tonality rendering. This is the unicorn I’ve been chasing for years!

Compare the two stills from the How Film Works short, which is a pretty solid little introduction.

Ignore the boring “hue” differences, and pay close attention to tonality.


Notice how in the digital RGB sensor capture, all we have is a varying of the emission strength, and the chroma ends up as just a massive wash of… similar chroma. Compared to the colour film, where the chroma degrades along the entire run of the stock, we get huge tonality due to the ability to degrade to greater levels of “brightness”.

Look at the helmet! The differences are striking!


Which “image” shows the detail and nuance of the helmet surface better?

So in the end, we have two radically different mediums, one of which, to this day, simply destroys our current digital mediums because of a lack of engineering focus on taking light data to an image:

  1. A _fixed_ filter set of varying emission levels.
  2. A variable filter set with fixed emission. The variable filters vary emission and chroma.

We should have been looking at image formation, not colour management.

I’d argue the opposite; trading off the physical facets for a high quality image.

Nowhere in that disingenuous document is there anything to do with image formation, which makes it unsurprising that, after a decade, it is a mess. M. Uchida’s early work from Fuji has long since been left behind. It’s embarrassing.

Many of the grandiose claims made have been countered rather strongly by industry veterans. ACES_TCAM - Google Presentaties


Yes! Totally agree. In fact, as a videographer I put a lot of effort into giving back some reminiscence of the film look to the sort of plastic-looking digital camera footage/photos. To at least recover some of the richer response curves, the grain, and the halation of analog film and photography.
This may be getting too off-topic, but one of my “dreams” for Blender is to have some sort of virtual simulated cameras that take the spectral data and create the image, emulating what cameras do. I mean, the render is spectral, the light passes through a lens and the iris, and hits a virtual sensor that may emulate either a digital sensor, Bayer pattern included (outputting a Bayered raw), or a virtual film stock, with stochastic film grain (based on stochastic sampling, not as a postprocess), with lens bloom, non-linear light and color response, even halation and uneven bokeh with interference patterns and fringing. Even the human eye/retina could be modeled that way, with diffraction from the eyelashes and dust specks in the eye.
LuxRender has a few of these features to some extent, but as I understand it, it’s just a postprocess over the rendered image; it’s not embedded in the image production. But I may be wrong.

That’s a nice video. I worked in a film developing lab back in the ’90s; it was one of my first jobs. I was a photo retoucher there, correcting dust specks and white marks in the prints. I mean a full-analog, no-Photoshop retoucher, armed with a small pointy brush and Kodak photo retouching paints, working over the final paper print. There was no scanner in that lab either; prints were made the old way too. My job was basically mixing, very quickly, the color that the white speck/hair/scratch was missing, and covering it with the same color. I see that video and I can feel the smell of the developing chemicals inside my head.

Agreed. The film version is more subtle and captures the delicate tones better. And it has a beautiful bloom and halation in the specular highlight.
Still, not all sensors are created equal. One of the problems with many sensors is that the RGB filters are “too perfect”, meaning that they let only a very narrow slice of the spectrum pass through. And that leads to all sorts of weird things, color-wise.
Analog film color filters were much “broader”, having a more “bell-shaped” response to frequencies, more “natural” in a way, so they captured color nuances way better.
But some digital cameras have broader/better spectral filters in the sensor and capture colors way better.
Here’s a video comparison (way off-topic, by the way, just to illustrate the point) between three cameras: the BMPCC, a GH5S, and the ZCam. It’s especially revealing what happens with the oranges on the BMPCC versus the other two.
The BMPCC sensor has a problem resolving orange colors that’s evident around 13:08 and 14:00.

Anyway. I get that this is not the only issue making digital photos so uninteresting compared to analog film. Just one aspect worth noting.

Is, then, a “physical camera simulator” something feasible in the near future for Blender? (Maybe this belongs in the Spectral Cycles thread, or even in another thread.)

Thank you again for your detailed answer. And tell me if I’m being annoying.