Blender Support for ACES (Academy Color Encoding System)

Do you mind explaining the steps you take in Blender and in Resolve for your ACES workflow?

Sure, although currently I mostly use other applications for the final renders; in the future I would like to keep it all in Blender (it would simplify so much!).

If an ACES OCIO config is set in your environment variables then Blender will use it. Make sure your textures are set to the correct color spaces (something like srgb_texture for color or raw for data) and render an exr (the ACES convention is 16-bit half-float RGB). There shouldn’t really be much more to it; it should be like working in filmic currently, just more standardized across applications.
Bring it into Resolve, set the clip input color space (probably acescg) and make sure Resolve is set to use ACES in the project settings (generally acescct to work in and output sRGB). Then just do what you normally do in Resolve to get the final look you’re after. You could even export your look as a lut or something and apply it as a look in your config file, so you can preview even closer to the final result in Blender (can even use variables in the config if you want to set per-shot looks, which is common in real productions that get looks from set, although I’m not currently doing it). End of the day what’s nice is all of it is really just for your viewport experience and getting the best preview to work with that’s consistent across applications (so you can properly light and color things and know what you’ll actually get in the end), the exr you render is still the same regardless.
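To make the first step concrete: Blender (like other OCIO-aware applications) picks up the config through the `OCIO` environment variable. The path below is a placeholder for wherever your ACES config actually lives:

```shell
# Point OCIO-aware applications (Blender, Nuke, etc.) at an ACES config.
# The path is a placeholder; substitute your own config location.
export OCIO="$HOME/aces/config.ocio"

# Launch Blender from the same shell so it inherits the variable.
blender
```

Once the variable is set, Blender’s color management panel will list the color spaces and looks from that config instead of its built-in ones.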

So in summary it shouldn’t be much more complicated than rendering exrs like you normally would, and the viewport will display with the ACES RRT applied so you can work knowing how it will look in Resolve also. A good simplified config file that comes with Blender and is designed with names Blender uses would be ideal, but right now everyone is either doing their own thing or using the full default one which is just awful to work with. Having to go through hundreds of textures and babysit their color spaces any time the config changes is not ideal, and it’s even worse when you can’t actually see all the color space selections on the screen and you have duplicates with ambiguous madness (the generated ACES config REALLY should not be the default config users are expected to use, hence me coming here and requesting Blender be the one to update their default config to add an ACES alternative to filmic that works seamlessly).

The current VFX Reference Platform specifies OCIO 2.0 and ACES 1.2, with 2.1 and 1.3 respectively for next year, so things are moving along and getting ironed out. Like I said, I’m pretty sure they’re including built-in ACES transforms in OCIO now, so you really just need a good config file. Other renderers like Redshift and Octane have already incorporated an ACES default with OCIO 2.0, and the Redshift config is already using the built-in transforms.


Just to make things clear ACES is not trying to be some perfect color space, that’s not the point. The point of ACES is to be a STANDARD color pipeline, and with that comes compromises. Those compromises are necessary to keep ACES as simple and transparent as possible while covering all the major bases.

We could argue all day about what an ideal color pipeline looks like (and based on the length of this thread it looks like people have), but that’s missing the point. ACES was designed with its compromises in mind; they are intentional. Sure, storing 16-bit half-float linear RGB is not the most efficient way to store color, but it is unambiguous and ubiquitous. Sure, they could use something more efficient like YUV or some integer-encoded log, but those decisions would add complexity and ambiguity. 16-bit in theory has plenty of precision as long as you’re not abusing your values in the grade or using finicky luts: it’s a 10-bit mantissa with a 5-bit exponent (plus a sign bit).
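As a quick sanity check on that precision claim, Python’s `struct` module can round-trip values through IEEE 754 half precision (1 sign bit, 5 exponent bits, 10 mantissa bits), which is exactly the sample format of a half-float exr:

```python
import struct

def to_half_and_back(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# 1.0 is exactly representable
print(to_half_and_back(1.0))

# 10 explicit mantissa bits -> spacing of 2**-10 near 1.0,
# so 1.001 rounds to the nearest representable neighbor
print(to_half_and_back(1.001))

# Largest finite half-float: (2 - 2**-10) * 2**15 = 65504
print(to_half_and_back(65504.0))
```

Roughly three significant decimal digits of mantissa, but with the 5-bit exponent covering a huge dynamic range, which is why it holds up for scene-linear data as long as you aren’t stacking destructive grades on top.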

The RRT is not meant to be your final “look” (unless you want it to be). It’s more like having default safety rails to make sure you’re not viewing clipped values and always working under some standard film curve. Typically I go for a film emulation type look, and I would make sure it looks the same with or without ACES, but I still work under ACES and the RRT rather than completely going off and doing my own thing because working under a standard is generally better. Even without the look applied you’re at least in the ballpark of something you can work with and you have the input and output benefits of the ACES standard being standard.


This is completely false.

It has everything to do with not being a management system. It doesn’t work.

This is also completely bunko.

The AP1 primaries are a stimulus based specification, and literally are bogus non-existent rubbish stimulus values that have no basis in real radiometric electromagnetic radiation. RGB rendering is stimulus rendering to begin with, and will always attenuate vastly differently to a closer-to-spectral model. There’s simply no comparison.

And the problems with ACES are far greater than simple gamut concerns. The very mechanics are broken, and again, it isn’t a colour management system as a result.

This is more rubbish.

The fact that the working stimulus space is vastly larger than the destination is indeed a solid chunk of the problem, compounded and made worse by per channel lookups that distort the stimulus entirely.

While important, it’s only a portion of the broken output that ACES delivers.

It is saying that the creative film response informed a century of subtractive based image making, building atop of thousands of years of subtractive based painting. If one doesn’t quite understand the difference between additive stimulus projection and the mechanic that forms the image versus the subtractive model, it is worth looking into.

Nope. Plenty of the dozens of problems with ACES were accidents. Follow the history.

Worse, it imbues work with a rather hideous digital RGB based aesthetic that is entirely unavoidable without doing as the majority of things that “use” it do; invert the output transform in an attempt to negate it.

For more information, it’s worth reading Chris Brejon’s piece. It specifically covers how the claimed number of productions that have used ACES is completely erroneous. Further, some studios mandate an ACES interchange, and as such, great effort has been expended to work around it.


I mean it DOES work and plenty of high end studios use it or variants of something similar to it. I would be curious what exactly “doesn’t work”.

The AP1 primaries are at the edge of the color locus, which represents the response to pure wavelengths, so yes, the AP1 primaries can be represented as pure wavelengths; no other combination of wavelengths will give the same result, so effectively that’s what they are.

A pure spectrally based color science is currently completely infeasible and is pretty pointless to talk about, and would make no difference (at least for representation of final images) for pretty much every existing display type. For the rendering itself, sure, there is a case to be made for working spectrally. But there are currently no standards for how to handle that, and it’s well outside the scope of ACES to define what that kind of change would look like. ACES is primarily a pipeline for interchange with existing software and color science; defining it spectrally would break compatibility with pretty much every software package and require completely new ways of representing color digitally (almost all software, including renderers, is RGB, and even when they represent more wavelengths internally they’re still mostly taking RGB input, interpreting it, and creating an RGB output).

CG work is also not the bulk concern of ACES. CG work is an optional part of the film pipeline, and the bulk of production would benefit even less from being spectral while making everything much more complex. Like I’ve said ACES is an effective compromise that is meant to be practical given the current state of things, not some “perfect” ideal.

You only very rarely encounter colors outside of sRGB in the real world, mostly coming from light sources with narrow bands of wavelengths, which tend to be rare unless you’re working with lasers for some reason. Even my example with LEDs is unlikely because even LEDs aren’t super narrow wavelengths. So while AP1 is indeed larger, in practice you should not be working anywhere near the extremes in the first place, and if you are that’s on you. But if for some niche reason you have to, some basic gamut compression should be enough in most cases, and I even think new ACES versions are doing some of that. If you have an actual example (done properly) that counters that I’d be curious to see it.
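As a hedged illustration of what “basic gamut compression” can mean in practice, here is a toy distance-based compressor that pulls extreme chroma back toward the achromatic axis. The curve shape and the `threshold`/`limit` parameters are illustrative assumptions, not the actual ACES 1.3 reference gamut compression:

```python
def compress_gamut(rgb, threshold=0.8, limit=1.2):
    """Toy gamut compression sketch (NOT the ACES 1.3 algorithm).

    Channels far below the per-pixel max (including negative values,
    i.e. out-of-gamut chroma) are smoothly pulled back toward the
    achromatic axis. Assumes the max channel is positive.
    """
    ach = max(rgb)  # achromatic anchor: per-pixel channel maximum
    if ach <= 0:
        return list(rgb)
    out = []
    for c in rgb:
        d = (ach - c) / ach  # 0 on the axis, > 1 means out of gamut
        if d > threshold:
            # Smoothly remap distances beyond the threshold so that
            # d == limit lands back on the gamut boundary (d == 1)
            d = threshold + (d - threshold) / (1 + (d - threshold) / (limit - threshold))
        out.append(ach - d * ach)
    return out
```

For example, a pixel like `[1.0, 0.5, -0.2]` (a negative blue from an aggressive camera matrix) comes back with the negative channel compressed up to zero while the in-gamut channels are untouched.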

I’m already familiar with the Chris Brejon piece and it still does not discredit the value of ACES. He has an insistence on how colors resolve to white, but that also requires more complex color transformations. The RRT is not to everyone’s liking, but yes if you want a very specific look then go ahead and do it with the inverse RRT, no one is stopping you. Or if you’re really that strongly opposed then use something else for final output, not everyone is going to be happy.

But for the most part, standard criticisms of the RRT (like the contrast and shoulder) can be resolved in the LMT. The RRT is not the end-all-be-all look; it’s more like a safety net and a standard, and in theory should have minimal “look” (obviously people will disagree on that). The look itself is up to the artist, but that doesn’t remove the fact that there has to be something to compress the wide range of scene values, and the RRT has plenty of reasons behind it. We could argue about the RRT all day, there’s a level of subjectivity to it, but it’s not really meant to be your final look anyway, and hyper-focusing on the look of it is sort of missing the point of ACES.


It really doesn’t.

As in it ensures no consistency at all across devices. What exactly is the point again?

And again, it must be noted that of the productions listed, many TDs have clearly stated that the output transform side is never used. Not rarely, but essentially so small a number as to be insignificant error.

They all try to invert, which is ultimately impossible.

This is also why Filmlight and several larger post houses are developing an attempted 2.0; it doesn’t work and they want to escape from under the errors, but due to higher level studio insistence on “an ACES” chain, they cannot.

It adds a specific digital gaudy look and it isn’t invertible to negate the influences of the fundamental mechanic.

This is unequivocally false. They actually lie beyond the locus of pure spectral stimulus under the 1931 observer.

Further still, it is a stimulus encoding. There’s no real relationship to actual light transport. It’s a hack. So further discussions are moot. Are RGB stimulus light transport models good enough? Sure, in some contexts. But suggesting that somehow AP1 does some magic is false.

Worse, because of the extreme negative and low luminance, it ultimately leads to less colourful appearing renderings. This is due to the distortions and gravity of the primaries under indirect bounces, which push outward to the bogus primaries, and to extreme low “brightness”. The general result in terms of sensation from that stimulus is less colourful.

Also false. Spectral rendering is actually a real thing that even a few folks here have managed to pull off, and the energy attenuation is absolutely striking.

Again not quite on point.

It is simply another knocked off per channel curve. Seriously nothing more. And it comes with all of that broken baggage, plus many other problems.

You are surely trying to kid here? You realize that film had a wider gamut than sRGB? That’s a hundred years of creative image work.

At any rate, constricting a gamut to some smaller range of stimulus to work around glaring faults in an asstastic protocol is up to whoever wants to choose it. Go nuts.

A few points:

  1. Literally every stimulus mixture that cannot be represented at the display or output medium becomes device dependent. This is the antithesis of colour management, even in the loosest and most silly of definitions.
  2. The basic mechanic of per channel causes stimulus mixtures to collapse to digital primaries and complements; the entire nuance of the range of values becomes distorted, and the imagery ends up like a preschooler twiddling knobs.
  3. The gamut volume / height is a catastrophe and cannot be negotiated with any look. Full stop.

Many, many other issues like this plague it. It’s absolute crap being rammed down image maker throats by a few studios to save a few bucks.

You should reach out to him and ask him if he uses it on his projects or if he would willingly do so.

It literally cannot be undone. It saddles every image maker with some garbage residue.

And again, false.

Worse, it is stuck in the open domain, and plenty of creative choices do not belong there.


You are deliberately misinterpreting things I say and missing the point. You also never bring up any better alternatives or standards, so I don’t get what your point is, or see any practical proof of your claims.

I never said AP1 wasn’t based on chromaticity, what I said was that the AP1 primaries are roughly on the edge of the color locus which corresponds to the response to pure wavelengths. They are designed to be very close to the edge of the chromaticity diagram (with the green and red edge running up along the side) and are very similar to the rec2020 primaries. Because of the way the tri-stimulus response works the wavelength values at the edge of the locus are actually not so ambiguous, if you wanted to interpret them in your renderer you could treat primaries on the outer locus as wavelengths.
It’s honestly a very minor point I was making to say AP1 should be wide enough for any RGB display while also roughly corresponding to wavelengths for rendering, if you’re so inclined (not that RGB rendering is actually that accurate, but we make many many more compromises in rendering anyway). I don’t know what other better color spaces you know that are instead spectrally defined with more than 3 primaries.
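For anyone who wants to check the numbers: the published AP1 chromaticities (red 0.713/0.293, green 0.165/0.830, blue 0.128/0.044, with the ACES ~D60 white at 0.32168/0.33767) fully determine the RGB-to-XYZ matrix via the standard construction used for any RGB space. A stdlib-only sketch:

```python
# Derive the AP1 (ACEScg) RGB -> XYZ matrix from its published
# chromaticities: scale each primary so that RGB (1,1,1) maps to the
# white point. This is the standard construction for any RGB space.
AP1 = {"R": (0.713, 0.293), "G": (0.165, 0.830), "B": (0.128, 0.044)}
WHITE = (0.32168, 0.33767)  # ACES white point (~D60)

def xy_to_XYZ(x, y, Y=1.0):
    """Lift a chromaticity coordinate to XYZ at luminance Y."""
    return (x * Y / y, Y, (1 - x - y) * Y / y)

def solve3(A, b):
    """Cramer's rule for a 3x3 linear system A @ s = b."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    s = []
    for col in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = b[r]
        s.append(det(Ac) / d)
    return s

# XYZ of each primary at unit luminance; solve for the per-primary
# scales that make the unit RGB cube corner land on the white point.
P = [xy_to_XYZ(*AP1[c]) for c in "RGB"]
A = [[P[0][i], P[1][i], P[2][i]] for i in range(3)]
S = solve3(A, xy_to_XYZ(*WHITE))
M = [[S[j] * P[j][i] for j in range(3)] for i in range(3)]

# Sanity check: RGB (1,1,1) should reproduce the white point's XYZ.
white = [sum(M[i][j] for j in range(3)) for i in range(3)]
```

The resulting `M` should match the AP1-to-XYZ matrix published in the ACES documentation to within rounding of the chromaticity inputs.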

Very few production renderers are spectral, and even when they are it’s usually pretty limited and mostly handled internally. I believe Octane has 6 primaries internally, pretty sure Maxwell has 12, something like Lux or Indigo is not commonly used in production, and of the big houses I only know that Weta’s renderer is spectral, but the most common renderers by far are RGB. Even when renderers are spectral it doesn’t mean they’re actually being fed good spectral data, the data they get is almost always RGB and the texture pipeline isn’t about to change to spectral anytime soon. But yes in an ideal world everything would be spectral, but that’s completely beyond the scope of the conversation of ACES.

The main issue with ACES as opposed to other REAL systems of color management is that it has to make compromises on certain things to be a standard for the widest range of productions/houses.

As I see it there are two ways things could have gone
-the first is to favor the “look” being almost entirely in the LMT and keeping the RRT much more basic (and ideally even reversible). The problem with this first strategy is that ACES would look horrendous by default without an LMT. It would put all the pressure on the LMT to make a viewable image, and since the LMT is supposed to be modifiable it also can’t be made standard, and it would essentially boil down to a return to the wild west of luts like before.
The other extreme would be to do even more in the RRT, more aggressive and destructive transformations for example to make sure the colors converge to white exactly how you think they should. The problem with this second strategy is that the RRT will essentially limit the ability of the user to get a specific look using the LMT under it, so the RRT will force a very specific look that you have very limited ability to counter (although sure by default it would look “good”).

The current version of ACES is sort of a compromise between these two ideologies, the RRT does some stuff to make a viewable image and is not reversible (you’re really not supposed to apply it to the data anyway until delivery though), but it also isn’t so aggressive that you can’t build a custom look under it or even use the inverse RRT when building the LMT to get something close to whatever you want (but yes once the RRT is applied you can’t just remove it, that’s not what I’m saying).
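The split being described is easy to express as function composition. The transforms below are toy stand-ins (a hypothetical exposure-trim LMT and a crude tone curve for the RRT+ODT, nothing like the real ACES math), just to show where the modifiable look sits relative to the fixed output transform:

```python
# ACES-style viewing chain as composition: the modifiable look (LMT)
# sits under a fixed output transform (RRT + ODT). Both transforms
# here are toy stand-ins, not the real ACES transforms.
def lmt(rgb):
    """Creative look, editable per show or shot (here: a +10% trim)."""
    return [c * 1.1 for c in rgb]

def rrt_odt(rgb):
    """Fixed standard transform: a crude tone curve plus display clamp.
    Assumes non-negative scene-linear input."""
    return [max(0.0, min(1.0, c / (c + 1.0) * 1.2)) for c in rgb]

def display(scene_linear):
    # The grade happens in scene-linear, *under* the standard transform;
    # only the composed result ever reaches the display.
    return rrt_odt(lmt(scene_linear))
```

Swapping the `lmt` changes the look everywhere the config is used, while `rrt_odt` stays identical across applications, which is the whole interchange argument.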

The problem with this “compromise” approach is that it’s susceptible to inevitably pissing everyone off because it’s not clearly one way or the other, the extremes of both camps will never be happy in this scenario. It’s pretty clear that you’re firmly in the second camp and want beautiful images from the RRT by default. Many others are much more in the first camp, they want a completely reversible RRT for a number of reasons, and that would mean taking out destructive things (like pretty desaturation of the highlights), and the default results would look even worse until you do significant “look” work (and you would have to know even more exactly what you’re doing in the grade).

I’m sure everyone working on the ACES spec is well aware of the challenge it is to make a standard to be adopted by so many elitist color assholes, and I’m sure they have reconsidered a lot over the last decade in the countless threads and arguments with people like yourself, and I look forward to whatever improvements they decide on for 2.0. But they are not clueless morons who randomly generated a color system. Like I said, they made intentional compromises, whether you understand or agree with those compromises or not. ACES is a valid way of working if you know what you’re doing and what it’s doing. It is not “ideal”, it is a practical standard. Studios with their own color teams that work entirely internally can do whatever they want, this isn’t really for them; ACES is a middle-ground for everyone else, and for interchange and archival purposes.


If something is shit, I don’t think it requires a rebuke. It’s just shit. Don’t use it.

They are beyond the 1931 locus. They are meaningless gobbledygook. That amounts to additional “distance” to cover, without any meaningful representation. Given that there’s no bearing between stimulus based light transport and a spectral based transport, it doesn’t seem that there’s any gain whatsoever here; it is apples to oranges. No comparison, and no gain. Only deeper problems.

Except again, the display can only display what it can. Given that wider footprints are a huge problem that results in device dependency, and that those wider stimulus values can’t be represented, what exactly works well here?

Again to be clear — it does not manage ■■■■ all. It really doesn’t. This is an often overlooked point.

To be very clear:

  1. It does not manage stimulus.
  2. It does not manage observer sensation.

Both of those sides encompass “colour” as we know it, and it does neither. So again, what exactly does it do? Answer: Nothing.

I’ll leave it at that.

I completely understand the desire for such a system that were to manage colour. I really do. ACES simply isn’t it. It’s a pure pile of horse shit peddled by studios trying to cut actual skilled image crafters out of the cost scheme.

It does not manage colour. It does not provide “consistency”. And worse, it imbues all work crafted under it with an anachronistic digital gaudy RGB look.

It’s the classic Emperor Ain’t Wearing No Clothes.


It “manages” it as much as any other existing color system. Once again I’m asking what you are actually suggesting. Do you want 32-channel spectral exr files? They’ll be 100MB+ and no existing software will take them. Short of something like that, every color system will make compromises. Fortunately those “compromises” don’t usually even make a difference for displaying most things for human tri-stimulus vision; the eye will not know the difference (besides some shifts in response depending on brightness). If you want to spectrally manipulate something then maybe, but that is an extremely niche requirement that is not inherent in making a good image. At the very most extreme case, maybe if you need to perfectly emulate certain film spectral responses, but those are also pretty arbitrary to whatever chemicals happen to make up a certain film stock. I honestly don’t get your point for any real use case.
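For scale, the arithmetic behind that file-size claim (uncompressed half-float samples; a 4K UHD frame is assumed here):

```python
# Uncompressed per-frame size: width * height * channels * bytes/sample.
# Half-float is 2 bytes per sample.
def frame_bytes(width, height, channels, bytes_per_sample=2):
    return width * height * channels * bytes_per_sample

spectral = frame_bytes(3840, 2160, 32)  # 32-channel "spectral" frame
rgb = frame_bytes(3840, 2160, 3)        # conventional RGB half-float

print(spectral / 1e6, "MB")  # ~531 MB per frame before compression
print(rgb / 1e6, "MB")       # ~50 MB per frame before compression
```

Over a 10x increase per frame before EXR compression even gets a chance, multiplied across every frame of every shot.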

The AP1 primaries are essentially on the locus (with the exception of the G, which is just a bit further out to encompass more). So yes it’s not perfect, but no, it’s not useless or meaningless; it is a compromise so that the gamut edge sits on the edge of the locus and holds more of the spectral response while maintaining three values. Perhaps they should have stuck with rec2020, but yet again they made a practical compromise that should be ok in most situations, and the RGB interpretation in an RGB renderer should work as expected; nothing should be input outside the spectrum locus anyway. If it bothers you then treat your render space as rec2020, idk what to tell you, you shouldn’t be touching fully saturated values regardless.


I don’t think this sounds right. I don’t know much but I don’t think spectral rendering is simply “adding more primaries”, there are solid spectral rendering algorithms like hero wavelength sampling.

Actually, after you feed the RGB data to the software, it will convert it to spectral data before rendering. This step is called spectral reconstruction. Because different wavelength mixtures can sometimes look like the same color, a set of 1931 XYZ values can be converted to many different wavelength mixtures (at least this is my current understanding), so there are actually many ways to do it.
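That many-spectra-to-one-color point (metamerism) can be shown with a toy model. Below, three narrow spikes are solved to produce exactly the same tristimulus response as a flat spectrum, under made-up Gaussian “cone” curves; the curves, wavelengths, and sampling grid are all illustrative assumptions, not the CIE 1931 observer:

```python
import math

# Toy metamerism demo: two physically different spectra producing the
# same tristimulus response. The three Gaussian "cone" sensitivities
# are illustrative stand-ins, NOT the CIE 1931 observer.
CONES = [(600.0, 40.0), (550.0, 40.0), (450.0, 30.0)]  # (peak nm, width)

def sens(i, nm):
    mu, sigma = CONES[i]
    return math.exp(-((nm - mu) ** 2) / (2 * sigma ** 2))

def tristimulus(spd):
    """Integrate a spectral power distribution against the toy cones."""
    return [sum(spd(nm) * sens(i, nm) for nm in range(380, 731, 5))
            for i in range(3)]

def solve3(A, b):
    """Cramer's rule for a 3x3 linear system."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for col in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = b[r]
        out.append(det(Ac) / d)
    return out

flat = lambda nm: 1.0              # broadband "white" spectrum
target = tristimulus(flat)

# Solve for three narrow spikes whose combined response matches the
# flat spectrum: a physically different but metameric spectrum.
SPIKES = [450, 550, 610]
A = [[sens(i, s) for s in SPIKES] for i in range(3)]
w = solve3(A, target)
spiky = lambda nm: dict(zip(SPIKES, w)).get(nm, 0.0)

# tristimulus(spiky) now matches tristimulus(flat): a metameric pair.
```

This is also why spectral reconstruction is underdetermined: going from three numbers back to a full spectrum requires picking one of infinitely many metamers, which is exactly what the different reconstruction techniques disagree on.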


Many renderers that are “spectral” still define a number of primaries, and in most cases it’s not very many, since it’s generally not worth the overhead. Some techniques automatically assume the spectral characteristics in the shader, for example treating more saturated colors as gaussian wavelength spikes and less saturated colors as broader curves. Regardless, spectral renderers still generally output RGB images; the integration and spectral response is handled by the renderer, so it means nothing to ACES or any subsequent color processing.


No. It does not.

“Colour” can be broken down into two classes:

  1. Observer based stimulus.
  2. Observer based sensation / appearance.

To “manage” “colour”, one of those must be managed.

It manages neither.

And given “other existing color systems” such as graphic design oriented ICC in fact do attempt this, ACES is well behind. Don’t believe me? Watch the VWG presentations for output transforms.

Check again.

But more importantly, ask yourself what this non-existent stimulus represents. Answer: Nothing. So the problem here is arriving at meaning, because even though no display can emit that stimulus, it is compounded by the fact that it then becomes an exercise in creating a meaningful and consistent output. ACES answer here? Currently just clip. That’s device dependency across sRGB, to DCI-P3, to you name it. Arbitrary output everywhere.

Secondly, the “gamut” mapping approach is to distort. So any conventional meaning there derived from a camera fitted observer stimulus matrix is now totally distorted toward the digital complements again. And now it’s baked deeply into your imagery. Forever. Oh… and it doesn’t actually fix the root problem at hand because it was a potentially misguided attempt.

Not sure where you heard this, but it’s rubbish.

Imagine someone has top dollar oil paints that reach out to the locus in terms of stimulus representations. Now imagine telling a painter to not touch them. It’s absolute ahistorical rubbish.

Image crafters use the medium. And they should damn well be permitted to use the entire range of the medium as they see fit.

But with that said, again, ACES is far more broken than that, even with silly constraints.

Have you tried a spectral rendering system? One brilliant person here managed to achieve it with Cycles, and that grew into an actual spectral version that is very close in performance to the stimulus based model. And yes, ACES still stinks on spectral-like content.

I don’t know what in the world you’re trying to say and you still haven’t given a real counter example for displays. It’s starting to feel like you just don’t know what you’re talking about. CIE 1931 is literally observer based, the chromaticity chart derives from humans matching pure wavelengths, so yes it is correlated to the LMS responses, it’s why there are no 3 perfect primaries, etc. I shouldn’t have to explain that if you know what you’re talking about.

Observer based sensation and appearance is much more complicated for a number of reasons, and I’ve never heard of a color space attempting to deal with that; color is relative, the human mind sees color relative to the colors around it. Maybe there’s a case to be made for accounting for the overall brightness of projection, since the LMS responses differ in darker environments, but that’s a pretty niche thing that won’t even come up in standard display ranges.

So in one breath you make it sound like the goal is to add complexity to make a perfect representation of reality on a scientific level (which isn’t even desirable or being asked for by pretty much any filmmaker), then on the other hand ask for these filmic responses that are completely removed from how humans see color and almost entirely arbitrary to the chemicals used, it’s the complete opposite direction. It’s almost like you don’t actually know what you want but like to complain.

Values outside of the chromaticity diagram do not mean “nothing”, it’s more they do not represent a physical wavelength or combination of wavelengths that could give a response, and the reason we can’t see that AP1 value is because the LMS curves actually overlap. So values outside are more like theoretical values as if the curves didn’t overlap. I do agree for the sake of rendering I would prefer interpreting the colors as on the locus, like BT2020, and I originally thought AP1 was analogous to that, but I guess ACES wanted to try to get a bit of extra range in that area (probably in an effort to future proof a bit for displays with different primaries). That being said in actual practice it should not make a huge difference, RGB renderers are already further from reality in other ways, but I’m open to proof otherwise.

This assertion is just funny to me because it’s so easy to disprove. Pretty much every person with a computer is viewing everything through sRGB, which has a much smaller gamut, and somehow even the most saturated values in sRGB are further than you almost ever see in reality. Rec2020 and AP1 are both much wider, literally values you’ll only ever see as lasers and will probably never see in your life, and even if you have to represent them they would not be represented pure.

Yeah and you’ll notice the results look almost identical to RGB in almost every non-contrived case. Personally I would love everything to be spectral, certain things definitely do benefit from spectral representation, but it’s on the renderer to figure that out. It’s something I was similarly passionate about ten years ago until I did actual tests. No one in their right mind is realistically asking color management itself to go spectral, it’s way way outside of ACES jurisdiction to do that. Yet again it seems you just don’t get the purpose of ACES as a standard for actual existing production, there’s no use talking more about it.


Management means manage / control.

Any system that claims to be a “management” system of “colour” must manage colour.

Colour is defined under two definitions by the CIE. One can loosely be summarized as a stimulus based specification, and the other via sensation / appearance.

What this means is that in ACES, if you specify the stimulus specification via the CIE model for say, a ColorChecker 24, what you get out is not a match for stimulus. And it is also not a match for appearance. It does not manage colour.

As for displays, they too are typically anchored in the stimulus model. That means that what you put into ACES doesn’t come out. Again, it isn’t a management system. And worse, that stimulus is mangled up differently per output device.

See also ICC v2 and ICC V4 as ancient examples of colour management systems that do just that.

Ahistorical nonsense. I challenge you to look up some of the canonized names in colour science with papers. You’ll find that a huge number list “Kodak” under them. There’s a reason for that; film was engineered entirely around appearance and said research. Even the most fundamental basic rate of change was engineered.

Bzzt. Wrong answer.

The CIE XYZ specification is literally an affine transformation away from LMS cone response, and as any good colour science peep knows, negative stimulus is nonsense.

That spectral locus edge is literally the edge of the standard observer model with respect to the purest of wavelengths. The values “beyond” are merely a mathematical byproduct. They are completely nonsense with respect to standard observer stimulus.

Are you guessing or do you want to know why that poor decision was made?

Things with wider than sRGB chroma representations…

  • MacBook Pros since 2015
  • iPhones since 2016
  • iPads since 2016
  • Many Android phones since revision 9.
  • Creative colour film since the Wizard of Oz.

Also, not quite sure what sRGB has to do with things because again, trying to be clear, ACES doesn’t manage stimulus, so it’s random output.

Anyways for folks who don’t really care about gamut voids and have no real idea what they are looking at, go nuts… use ACES. For folks who are actually keen on forward looking solutions, use TCAM and Baselight.


It does not manage SPECTRAL color. For the one millionth time the spectral makeup of a display system is completely out of bounds of the ACES spec. It manages color to the same extent as any other color space based on XYZ, in other words color spaces designed for displays and additive projection, which is literally the point of ACES.

I have absolutely no idea why you are criticizing ACES for literally being designed for something different than you want. It’s akin to complaining that websites don’t use exr instead of jpeg, it’s like you don’t even understand the point of the specification and its role in an actual production pipeline.

ICC literally does almost the exact same steps as ACES: with ICC there’s a device mapping into XYZ, and ACES is also based on having device mappings (IDTs) into XYZ coordinates. It’s still not entirely clear what you’re trying to say; neither of them is spectral or clearly more “stimulus based”, they’re both based on CIE, and it is not a counter example to my point.

I don’t disagree with that, I disagree with the idea that film has anything to do with the human visual system. You’re literally getting into the nitty gritty of human tri-stimulus response. Film is not an accurate representation of human vision, you get all types of crazy shifts in color depending on the film stock used and how it’s developed, it’s completely counter to the accuracy you keep describing as being so important.

Did I ever say differently? That’s the point. Negative values come from the overlap in LMS responses being compensated for in the CIE 1931 color matching experiments. If there were no overlap there would be no need for negative values or the weird chromaticity shape from normalized values. If we had a physical way to trigger M cones in isolation, the color would be seen as a green more intense than any physical wavelength. We can sort of emulate this effect with cone fatigue: staring at a bright magenta color leaves a ghostly green afterimage. But honestly, who cares, that’s not the actual point.

We use imaginary color primaries all the time to encompass the whole spectrum locus. It’s literally at the heart of most of our color spaces and XYZ which is the basis of pretty much all our color science.

The actual point I was trying to make is that in most of the stuff ACES is designed for it doesn’t really matter if there’s a physical representation of the primaries. The main reason you might want a physical representation is in something like rendering, and I AGREED with that. If we want to pretend RGB rendering is modeling the real world, I agree the least we can do is keep the primaries either in or on the locus. And points on the locus are much less ambiguous and can be treated as specific wavelengths and informally are “specified” (you should be overjoyed). So if you have an unusual obsession with the spectral characteristics of a color space, if the primaries are on the locus it’s probably a safe bet you can figure out three exact wavelengths. My point with rendering is that in practice it doesn’t really matter, RGB is still not how reality works, and concerns like that are outside of ACES jurisdiction to care about anyway.

So after all of this unnecessary back and forth all I’m really getting is you don’t understand the point of ACES and you want it to be something that it’s not designed to be.
So moving on from that, yes Blender should support an ACES default in some form. That is all.


Reread what I wrote. It doesn’t manage colour, in either definition of the term.

You are conflating negative additive light experiments with meaningless stimulus coordinates.

Again, they hold no meaning with respect to stimulus, which means they cannot be represented on any display, because they are meaningless.

And again, ACES does not manage colour in either CIE definition of the term. Read that again carefully; a stimulus coordinate never makes it out of the working RGB model, and the output corresponds with neither stimulus nor colour appearance. It’s meaningless garbage because of the fundamental mechanic of per channel curves.

Again, read back and understand the entire point I’ve been making over and over and over and over again:

  1. Blender still isn’t properly managed.
  2. ACES is overly complex, and if someone wants to try it, go nuts: change the config. Baking details into the shipped config is what leads to the existing garbage that is ancient and out of touch, such as the existing ACES reference in the configuration.
  3. ACES doesn’t work, and never has. It is not a management system.
  4. Congruent with 3., it also makes all imagery look like a gaudy digital mess.
  5. It leads to gamut voids which have a colossal impact on albedos, etc.

It’s really a combination of things coming from the experience of having to push along colour management in Blender for literally over a decade.

Blender isn’t entirely ready, and ACES doesn’t work, and ACES leads to hideous work, by default.

I can’t fight windbags. I can attempt to explain things. That’s all I can do.

Nothing is stopping anyone from ■■■■■■■ using ACES. It doesn’t work within Blender quite yet, nor does it even work at all. But hey… have fun. I’ll just make a clear case that it’s a ■■■■■■■ horrible idea in Blender as a default. Absolutely. ■■■■■■■. Horrible.


Well then neither does ICC or any other color system I’m aware of. Still waiting for that alternative example.

The results of CIE 1931 are entirely a byproduct of stimulus response. I still have no idea what you’re even referring to: what stimulus color space are they supposed to use if not XYZ? The CIE experiments are based on matching wavelengths to RGB combinations, and when a match cannot be reached the pure wavelength is compensated until a match is reached (hence the negative values). This is entirely because of the cone response curves. The reason the chromaticity diagram looks the way it does is the cone responses; it’s not just a fun coincidence of some random experiment. XYZ is effectively the closest descriptor we commonly use to a stimulus mapping.
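A small numeric illustration of the negative-value compensation described above: the spectral green near 520 nm (chromaticity roughly x = 0.0743, y = 0.8338) lies outside the sRGB triangle, so expressing it in sRGB primaries forces a negative red channel, the same kind of compensation the 1931 matching experiments required. The matrix is the standard XYZ to linear-sRGB one; the exact chromaticity values are approximate.

```python
# Standard CIE XYZ -> linear-sRGB matrix.
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def xyz_to_srgb_linear(xyz):
    """Map CIE XYZ coordinates to linear sRGB (values may be negative)."""
    return [sum(row[i] * xyz[i] for i in range(3)) for row in XYZ_TO_SRGB]

# Approximate chromaticity (x, y) of monochromatic ~520 nm light,
# converted to XYZ with Y normalised to 1.
x, y = 0.0743, 0.8338
xyz = [x / y, 1.0, (1.0 - x - y) / y]

rgb = xyz_to_srgb_linear(xyz)
print(rgb)  # red channel comes out negative: out of gamut for sRGB
```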

So with that said, yes, my points still stand. I don’t know how much more “managed” color is supposed to be, and I’m not sure you even know or can explain what that would look like. So unless you can clarify, there’s not much more to say, and your criticisms really have no valid basis or real-world alternative.


@troy_s If people are consistently misunderstanding what you’re saying, then it’s your fault for not communicating properly.

And quite frankly, even if everything you say is correct, your attitude and the way you talk makes me really not want to listen to anything you say, ever.

You do an absolutely horrible job of convincing other people of your points, you don’t answer people’s questions, you never explain anything, you just constantly insist “I’m right, you’re wrong, ACES sucks, but I won’t give any details why”. That’s not the right way to make arguments.

If you want other people to listen to you, you need to learn how to make good arguments, based on evidence and logic, actually explaining things, backing up your claims with proof, not just insisting over and over that you’re right. Yes it takes more time, yes it’s more effort, but it’s necessary.


From my perspective he did explain a lot; he just didn’t link to any technical data.

I don’t have the technical knowledge of the colour management system to argue differently. It’s kind of over my head.

But all of this helped me to understand the subject generally.

Now to Blender: it has its glaring issues; some systems get more attention than others, which is why it’s so fragmented with implemented features and fixes, etc.

If we compare DaVinci Resolve with Blender on the basics of colour management, we can see where Blender is lacking.

So my side question is: does DaVinci Resolve use ACES in any form?
If not, why not?
What are the standards for colour management, and/or what does the industry use, and why?

I’m sure that having these questions answered would shine a bit of light on where we should go from there: whether ACES is just a blind alley with magical promises, or whether there’s a system that fills this role in a better way.


This whole discussion is fascinating to read. I don’t know enough about colorspaces and colormanagement to know who is ‘right’. I suspect both sides make their fair points.

But it looks like two people speaking completely different languages to each other while both thinking they’re speaking the same one.

I did learn a lot more about colorspaces from this discussion, so to me it’s a win. :smiley: and @troy_s could stand to come off his high horse a bit. ACES can be a bad standard, but it’s a standard nonetheless. If lots of people use something you have to accept that they find it useful, however ridiculous it may be to you. I still think you make a lot of good points, don’t get me wrong. And fighting for something better than ACES is probably a worthy fight. And if the plan was to make ACES the Blender default then I could understand your apparent anger. But that’s not the case. People just want to have it as an (edit:) ‘easy to select’ option.

edit: oops, this wasn’t really meant as a reply to @Dragosh. But I agree with you completely :wink:

edit2: I understand you can already use ACES with blender now if you put the right config files somewhere or fiddle with environment variables or somesuch magic incantation. That’s maybe fine, but most people want just a dropdown.
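For anyone curious, the “magic incantation” is just the `OCIO` environment variable, which Blender (and other OpenColorIO-aware applications) reads at startup. A minimal sketch, with a hypothetical config path you would replace with wherever your ACES config actually lives:

```python
import os

# Hypothetical path to a downloaded ACES OCIO config. When the OCIO
# environment variable is set before Blender starts, Blender swaps its
# built-in color management for the config's color spaces and view
# transforms. Set it system-wide, or set it here and launch Blender
# from the same process/shell.
os.environ["OCIO"] = "/path/to/aces-config/config.ocio"

print(os.environ["OCIO"])
```

A dropdown in Blender’s preferences would of course make this step unnecessary, which is the point being argued for.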