Do you mind explaining the steps you take in Blender and in Resolve for your ACES workflow?
Sure. Currently I mostly use other applications for the final renders, but in the future I would like to keep it all in Blender (it would simplify so much!).
If an ACES OCIO config is set in your environment variables then Blender will use it. Make sure your textures are set to the correct color spaces (something like srgb_texture for color or raw for data) and render an EXR (the ACES convention is 16-bit half-float RGB). There shouldn't really be much more to it; it should be like working in Filmic currently, just more standardized across applications.
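For anyone setting this up, the environment variable step can be as simple as the following shell sketch (the config path is a placeholder for wherever your ACES config actually lives):

```shell
# Point OCIO-aware apps (Blender, Nuke, etc.) at an ACES config.
# Must be set before launching Blender; the path is a placeholder.
export OCIO="$HOME/configs/aces_1.2/config.ocio"
```

Blender reads this variable once at startup, so set it in your shell profile (or a launcher script) rather than inside a running session.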
Bring it into Resolve, set the clip input color space (probably ACEScg) and make sure Resolve is set to use ACES in the project settings (generally ACEScct to work in and output sRGB). Then just do what you normally do in Resolve to get the final look you're after. You could even export your look as a LUT or something and apply it as a look in your config file, so you can preview even closer to the final result in Blender (you can even use variables in the config if you want to set per-shot looks, which is common in real productions that get looks from set, although I'm not currently doing it). At the end of the day, what's nice is that all of it is really just for your viewport experience and getting the best preview to work with that's consistent across applications (so you can properly light and color things and know what you'll actually get in the end); the EXR you render is still the same regardless.
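As a sketch, a grade exported from Resolve as a .cube LUT could be wired into an OCIO config as a look like this (the name, file, and ACEScct process space are illustrative; check them against your own config's color space names):

```yaml
looks:
  - !<Look>
    name: show_grade
    process_space: ACES - ACEScct
    transform: !<FileTransform> {src: show_grade.cube, interpolation: tetrahedral}
```

That look then shows up in Blender's color management "Look" dropdown, so the viewport preview gets closer to the graded result in Resolve.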
So in summary it shouldn't be much more complicated than rendering EXRs like you normally would, and the viewport will display with the ACES RRT applied so you can work knowing how it will look in Resolve too. A good simplified config file that comes with Blender and is designed with the names Blender uses would be ideal, but right now everyone is either doing their own thing or using the full default one, which is just awful to work with. Having to go through hundreds of textures and babysit their color spaces any time the config changes is not ideal, and it's even worse when you can't actually see all the color space selections on the screen and you have duplicates and ambiguous madness (the generated ACES config REALLY should not be the default config users are expected to use, hence me coming here and requesting that Blender be the one to update their default config to add an ACES alternative to Filmic that works seamlessly).
The current VFX Reference Platform is OCIO 2.0 and ACES 1.2, with 2.1 and 1.3 respectively for next year, so things are moving along and getting ironed out. Like I said, I'm pretty sure they're including built-in ACES transforms in OCIO now, so you really just need a good config file. Other renderers like Redshift and Octane have already incorporated an ACES default with OCIO 2.0, and the Redshift config is already using the built-in transforms.
Just to make things clear, ACES is not trying to be some perfect color space; that's not the point. The point of ACES is to be a STANDARD color pipeline, and with that comes compromises. Those compromises are necessary to keep ACES as simple and transparent as possible while covering all the major bases.
We could argue all day about what an ideal color pipeline looks like (and based on the length of this thread it looks like people have), but that's missing the point. ACES was designed with its compromises in mind; they are intentional. Sure, storing 16-bit half-float linear RGB is not the most efficient way to store color, but it is unambiguous and ubiquitous. Sure, they could use something more efficient like YUV or some integer-encoded log, but those decisions would add complexity and ambiguity. 16-bit in theory has plenty of precision as long as you're not abusing your values in the grade or using finicky LUTs; it's a 10-bit mantissa with a 5-bit exponent.
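The half-float layout is easy to sanity-check with numpy, if you want to verify the precision claim:

```python
import numpy as np

# IEEE 754 half float: 1 sign bit, 5 exponent bits, 10 mantissa bits.
info = np.finfo(np.float16)
print(info.eps)  # 2**-10, the step just above 1.0
print(info.max)  # 65504.0, plenty of headroom for scene-linear values
```

That works out to roughly three significant decimal digits at every exposure level, which is why it's usually abusive grading operations, not the storage format, that introduce banding.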
The RRT is not meant to be your final "look" (unless you want it to be). It's more like having default safety rails to make sure you're not viewing clipped values and always working under some standard film curve. Typically I go for a film emulation type look, and I would make sure it looks the same with or without ACES, but I still work under ACES and the RRT rather than completely going off and doing my own thing, because working under a standard is generally better. Even without the look applied you're at least in the ballpark of something you can work with, and you have the input and output benefits of the ACES standard being standard.
This is completely false.
It has everything to do with not being a management system. It doesn't work.
This is also completely bunko.
The AP1 primaries are a stimulus based specification, and literally are bogus non-existent rubbish stimulus values that have no basis in real radiometric electromagnetic radiation. RGB rendering is stimulus rendering to begin with, and will always attenuate vastly differently to a closer-to-spectral model. There's simply no comparison.
And the problems with ACES are far greater than simple gamut concerns. The very mechanics are broken, and again, it isn't a colour management system as a result.
This is more rubbish.
The fact that the working stimulus space is vastly larger than the destination is indeed a solid chunk of the problem, compounded and made worse by per channel lookups that distort the stimulus entirely.
While important, it's only a portion of the broken output that ACES delivers.
It is saying that the creative film response informed a century of subtractive based image making, building atop thousands of years of subtractive based painting. If one doesn't quite understand the difference between additive stimulus projection and the mechanic that forms the image versus the subtractive model, it is worth looking into.
Nope. Plenty of the dozens of problems with ACES were accidents. Follow the history.
Worse, it imbues work with a rather hideous digital RGB based aesthetic that is entirely unavoidable without doing as the majority of things that "use" it do: invert the output transform in an attempt to negate it.
For more information, it's worth reading Chris Brejon's piece. It specifically covers how the number of productions that have used ACES is completely erroneous. Further, some studios mandate an ACES interchange, and as such, great effort has been expended to work around it.
I mean it DOES work, and plenty of high end studios use it or variants of something similar to it. I would be curious what exactly "doesn't work".
The AP1 primaries are at the edge of the color locus, which represents the response to pure wavelengths, so yes, the AP1 primaries can be represented as pure wavelengths, and no other combinations of wavelengths will give the same result, so effectively that's what they are.
A purely spectral color science is currently completely infeasible and is pretty pointless to talk about, and would make no difference (at least for representation of final images) for pretty much every existing display type. For the rendering itself, sure, there is a case to be made for working spectrally. But there are currently no standards for how to handle that, and it's well outside the scope of ACES to define what that kind of change would look like. ACES is primarily a pipeline for interchange with existing software and color science; defining it spectrally would break compatibility with pretty much every piece of software and require completely new ways of representing color digitally (almost all software, including renderers, is RGB, and even when they represent more wavelengths internally they're still mostly taking RGB input, interpreting that, and creating an RGB output).
CG work is also not the bulk concern of ACES. CG work is an optional part of the film pipeline, and the bulk of production would benefit even less from being spectral while making everything much more complex. Like I've said, ACES is an effective compromise that is meant to be practical given the current state of things, not some "perfect" ideal.
You only very rarely encounter colors outside of sRGB in the real world, mostly coming from light sources with narrow bands of wavelengths, which tend to be rare unless you're working with lasers for some reason. Even my example with LEDs is unlikely, because even LEDs aren't super narrow wavelengths. So while AP1 is indeed larger, in practice you should not be working anywhere near the extremes in the first place, and if you are, that's on you. But if for some niche reason you have to, some basic gamut compression should be enough in most cases, and I even think new ACES versions are doing some of that. If you have an actual example (done properly) that counters that, I'd be curious to see it.
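For readers wondering what "basic gamut compression" might look like, here is a toy per-component sketch loosely in the spirit of the ACES 1.3 reference gamut compression. The compression curve and the threshold/limit constants are invented for illustration; they are not the official ones:

```python
import numpy as np

def compress_gamut(rgb, threshold=0.8, limit=1.2):
    """Toy gamut compression: per-component 'distance' from the
    achromatic axis is squeezed smoothly past a threshold so it
    asymptotes at `limit`. Illustrative constants, not the ACES ones."""
    rgb = np.asarray(rgb, dtype=float)
    ach = rgb.max()
    if ach <= 0:
        return rgb
    d = (ach - rgb) / ach                  # 0 = on the achromatic axis
    over = np.maximum(d - threshold, 0.0)  # excess beyond the threshold
    cd = np.where(d <= threshold,
                  d,
                  threshold + over / (1.0 + over / (limit - threshold)))
    return ach - cd * ach
```

For example, `compress_gamut([2.0, 0.5, -0.4])` pulls the out-of-gamut negative blue component back up to 0.0 while leaving the components inside the threshold untouched; fully in-gamut colors pass through unchanged.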
I'm already familiar with the Chris Brejon piece and it still does not discredit the value of ACES. He has an insistence on how colors resolve to white, but that also requires more complex color transformations. The RRT is not to everyone's liking, but yes, if you want a very specific look then go ahead and do it with the inverse RRT; no one is stopping you. Or if you're really that strongly opposed, then use something else for final output. Not everyone is going to be happy.
But for the most part the standard criticisms of the RRT (like the contrast and shoulder) can be resolved in the LMT. The RRT is not the end-all be-all look; it's more like a safety and a standard, and in theory should have minimal "look" (obviously people will disagree on that). The look itself is up to the artist, but that doesn't remove the fact that there has to be something to compress the wide range of scene values, and the RRT has plenty of reasons behind it. We could argue about the RRT all day, there's a level of subjectivity to it, but it's not really meant to be your final look anyway, and hyper-focusing on the look of it is sort of missing the point of ACES.
It really doesnât.
As in it ensures no consistency at all across devices. What exactly is the point again?
And again, it must be noted that of the productions listed, many TDs have clearly stated that the output transform side is never used. Not rarely, but in essentially as small a number as to be insignificant error.
They all try to invert, which is ultimately impossible.
This is also why Filmlight and several larger post houses are developing an attempted 2.0; it doesn't work and they want to escape from under the errors, but due to higher level studio insistence on "an ACES" chain, they cannot.
It adds a specific digital gaudy look, and it isn't invertible to negate the influences of the fundamental mechanic.
This is unequivocally false. They are actually beyond the locus of pure spectral stimulus under the 1931 observer.
Further still, it is a stimulus encoding. There's no real relationship to actual light transport. It's a hack. So further discussions are moot. Are RGB stimulus light transport models good enough? Sure, in some contexts. But suggesting that somehow AP1 does some magic is false.
Worse, because of the extreme negative and low luminance, it ultimately leads to less colourful appearing renderings. This is due to the distortions and gravity of the primaries under indirect bounces, which push outward to the bogus primaries and to extreme low "brightness". The general result, in terms of sensation from that stimulus, is less colourful.
Also false. Spectral rendering is actually a real thing that even a few folks here have managed to pull off, and the energy attenuation is absolutely striking.
Again not quite on point.
It is simply another knocked off per channel curve. Seriously nothing more. And it comes with all of that broken baggage, plus many other problems.
You are surely trying to kid here? You realize that film had a wider gamut than sRGB? That's a hundred years of creative image work.
At any rate, constricting a gamut to some smaller range of stimulus to work around glaring faults in an asstastic protocol is up to whoever wants to choose it. Go nuts.
A few points:
- Literally every stimulus mixture that cannot be represented at the display or output medium becomes device dependent. This is the antithesis of colour management, even in the loosest and most silly of definitions.
- The basic mechanic of per channel causes stimulus mixtures to collapse to digital primaries and complements; the entire nuance of the range of values becomes distorted, and the imagery ends up like a preschooler twiddling knobs.
- The gamut volume / height is a catastrophe and cannot be negotiated with any look. Full stop.
Many, many other issues like this plague it. It's absolute crap being rammed down image makers' throats by a few studios to save a few bucks.
You should reach out to him and ask him if he uses it on his projects or if he would willingly do so.
It literally cannot be undone. It saddles every image maker with some garbage residue.
And again, false.
Worse, it is stuck in the open domain, and plenty of creative choices do not belong there.
You are deliberately misinterpreting things I say and missing the point. You also never bring up any better alternatives or standards, so I don't get what your point is, or any practical proof of your claims.
I never said AP1 wasn't based on chromaticity; what I said was that the AP1 primaries are roughly on the edge of the color locus, which corresponds to the response to pure wavelengths. They are designed to be very close to the edge of the chromaticity diagram (with the green and red edge running up along the side) and are very similar to the Rec.2020 primaries. Because of the way the tri-stimulus response works, the wavelength values at the edge of the locus are actually not so ambiguous; if you wanted to interpret them in your renderer, you could treat primaries on the outer locus as wavelengths.
It's honestly a very minor point I was making: AP1 should be wide enough for any RGB display while also roughly corresponding to wavelengths for rendering, if you're so inclined (not that RGB rendering is actually that accurate, but we make many, many more compromises in rendering anyway). I don't know what other better color spaces you know of that are instead spectrally defined with more than 3 primaries.
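For the curious, the published AP1 chromaticities sit close to Rec.2020's, and the standard primaries-plus-white-point derivation reproduces the ACEScg RGB-to-XYZ matrix. A numpy sketch, using the chromaticity values as published in the ACES and BT.2020 specs:

```python
import numpy as np

# xy chromaticities: AP1 (ACES spec), BT.2020, and the ACES ~D60 white.
AP1 = {"R": (0.713, 0.293), "G": (0.165, 0.830), "B": (0.128, 0.044)}
REC2020 = {"R": (0.708, 0.292), "G": (0.170, 0.797), "B": (0.131, 0.046)}
WHITE = (0.32168, 0.33767)

def rgb_to_xyz_matrix(prim, white):
    """Standard derivation of an RGB->XYZ matrix from xy primaries:
    scale each primary's XYZ column so that RGB=(1,1,1) maps to white."""
    cols = [[x / y, 1.0, (1.0 - x - y) / y]
            for x, y in (prim["R"], prim["G"], prim["B"])]
    M = np.array(cols).T
    wx, wy = white
    W = np.array([wx / wy, 1.0, (1.0 - wx - wy) / wy])
    S = np.linalg.solve(M, W)  # per-primary scale factors
    return M * S

M_ap1 = rgb_to_xyz_matrix(AP1, WHITE)
print(np.round(M_ap1, 4))  # first row ~ [0.6625, 0.1340, 0.1562]
```

Note AP1's green (y = 0.830) sits a touch above Rec.2020's (y = 0.797), which is the small step past the locus being argued about in this thread.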
Very few production renderers are spectral, and even when they are it's usually pretty limited and mostly handled internally. I believe Octane has 6 primaries internally, pretty sure Maxwell has 12; something like Lux or Indigo is not commonly used in production, and of the big houses I only know that Weta's renderer is spectral, but the most common renderers by far are RGB. Even when renderers are spectral it doesn't mean they're actually being fed good spectral data; the data they get is almost always RGB, and the texture pipeline isn't about to change to spectral anytime soon. Yes, in an ideal world everything would be spectral, but that's completely beyond the scope of the conversation about ACES.
The main issue with ACES as opposed to other REAL systems of color management is that it has to make compromises on certain things to be a standard for the widest range of productions/houses.
As I see it there are two ways things could have gone:
- The first is to favor the "look" being almost entirely in the LMT and keeping the RRT much more basic (and ideally even reversible). The problem with this first strategy is that ACES would look horrendous by default without an LMT. It would put all the pressure on the LMT to make a viewable image, and since the LMT is supposed to be modifiable it also can't be made standard, so it would essentially boil down to a return to the wild west of LUTs like before.
- The other extreme would be to do even more in the RRT: more aggressive and destructive transformations, for example to make sure the colors converge to white exactly how you think they should. The problem with this second strategy is that the RRT would essentially limit the ability of the user to get a specific look using the LMT under it, so the RRT would force a very specific look that you have very limited ability to counter (although sure, by default it would look "good").
The current version of ACES is sort of a compromise between these two ideologies: the RRT does some stuff to make a viewable image and is not reversible (you're really not supposed to apply it to the data anyway until delivery), but it also isn't so aggressive that you can't build a custom look under it, or even use the inverse RRT when building the LMT to get something close to whatever you want (but yes, once the RRT is applied you can't just remove it; that's not what I'm saying).
The problem with this "compromise" approach is that it's susceptible to inevitably pissing everyone off because it's not clearly one way or the other; the extremes of both camps will never be happy in this scenario. It's pretty clear that you're firmly in the second camp and want beautiful images from the RRT by default. Many others are much more in the first camp: they want a completely reversible RRT for a number of reasons, and that would mean taking out destructive things (like the pretty desaturation of the highlights), and the default results would look even worse until you do significant "look" work (and you would have to know even more exactly what you're doing in the grade).
I'm sure everyone working on the ACES spec is well aware of the challenge it is to make a standard to be adopted by so many elitist color assholes, and I'm sure they have reconsidered a lot over the last decade in the countless threads and arguments with people like yourself, and I look forward to whatever improvements they decide on for 2.0. But they are not clueless morons who randomly generated a color system. Like I said, they made intentional compromises, whether you understand or agree with those compromises or not. ACES is a valid way of working if you know what you're doing and what it's doing. It is not "ideal"; it is a practical standard. Studios with their own color teams that work entirely internally can do whatever they want; this isn't really for them. ACES is a middle ground for everyone else, and for interchange and archival purposes.
If something is shit, I donât think it requires a rebuke. Itâs just shit. Donât use it.
They are beyond the 1931 locus. They are meaningless gobbledygook. That amounts to additional "distance" to cover, without any meaningful representation. Given that there's no bearing between stimulus based light transport versus a spectral based transport, it doesn't seem that there's any gain whatsoever here; it is apples to oranges. No comparison, and no gain. Only deeper problems.
Except again, the display can only display what it can. Given that wider footprints are a huge problem that results in device dependency, and that those wider stimulus values can't be represented, what exactly works well here?
Again, to be clear: it does not manage anything at all. It really doesn't. This is an often overlooked point.
To be very clear:
- It does not manage stimulus.
- It does not manage observer sensation.
Both of those sides encompass "colour" as we know it, and it does neither. So again, what exactly does it do? Answer: Nothing.
I'll leave it at that.
I completely understand the desire for a system that would manage colour. I really do. ACES simply isn't it. It's a pure pile of horse shit peddled by studios trying to cut actual skilled image crafters out of the cost scheme.
It does not manage colour. It does not provide "consistency". And worse, it imbues all work crafted under it with an anachronistic digital gaudy RGB look.
It's the classic Emperor Ain't Wearing No Clothes.
It "manages" it as much as any other existing color system. Once again I'm asking: what are you actually suggesting? Do you want 32-channel spectral EXR files? They'll be 100MB+ and no existing software will take them. Short of something like that, every color system will make compromises. Fortunately those "compromises" don't usually even make a difference for displaying most things for human tri-stimulus vision; the eye will not know the difference (besides some shifts in response depending on brightness). If you want to spectrally manipulate something then maybe, but that is an extremely niche requirement that is not inherent in making a good image. At the very most extreme, maybe if you need to perfectly emulate certain film spectral responses, but those are also pretty arbitrary to whatever chemicals happen to make up a certain film stock. I honestly don't get your point for any real use case.
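The "100MB+" figure checks out with simple arithmetic (uncompressed, and a UHD frame is assumed here purely for illustration):

```python
# Uncompressed size of one 32-channel half-float UHD frame.
width, height, channels, bytes_per_sample = 3840, 2160, 32, 2
size_bytes = width * height * channels * bytes_per_sample
print(size_bytes / 2**20)  # about 506 MiB per frame, before compression
```

Even with EXR's lossless compression knocking that down substantially, a per-frame spectral interchange format would be far heavier than the 3-channel half-float convention.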
The AP1 primaries are essentially on the locus (with the exception of the green, which is just a bit further out to encompass more). So yes, it's not perfect; no, it's not useless or meaningless. It is a compromise so that the edge sits on the edge of the locus and holds more of the spectral response while maintaining three values. Perhaps they should have stuck with Rec.2020, but yet again they made a practical compromise that should be fine in most situations, and the RGB interpretation in an RGB renderer should work as expected; nothing should be input out of the spectrum locus anyway. If it bothers you then treat your render space as Rec.2020, I don't know what to tell you; you shouldn't be touching fully saturated values regardless.
I don't think this sounds right. I don't know much, but I don't think spectral rendering is simply "adding more primaries"; there are solid spectral rendering algorithms like hero wavelength sampling.
Actually, after you feed the RGB data to the software, it will convert it to spectral data before rendering. This step is called spectral reconstruction. There are many ways to do it, because different wavelength mixtures can look like the same color, so a given set of 1931 XYZ values can be converted to many different wavelength mixtures (at least this is my current understanding).
Many renderers that are "spectral" still define a number of primaries, and in most cases it's not very many, since it's generally not worth the overhead. Some techniques automatically assume the spectral characteristics in the shader, for example treating more saturated colors as gaussian wavelength spikes and less saturated colors as broader curves. Regardless, spectral renderers still generally output RGB images; the integration and spectral response are handled by the renderer, so it means nothing to ACES or any subsequent color processing.
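The "saturated colors become narrow spikes" idea can be sketched in a few lines. The saturation-to-width mapping below is entirely invented for illustration; real spectral upsampling methods are considerably more involved:

```python
import numpy as np

def gaussian_spd(center_nm, saturation, wavelengths):
    # Toy spectral reconstruction: high saturation -> narrow spike,
    # low saturation -> broad hump. The width mapping is made up.
    width = 10.0 + (1.0 - saturation) * 150.0  # nm
    spd = np.exp(-0.5 * ((wavelengths - center_nm) / width) ** 2)
    return spd / spd.max()  # normalize peak to 1

wl = np.arange(380.0, 781.0, 5.0)       # visible range, 5 nm steps
narrow = gaussian_spd(550.0, 0.95, wl)  # laser-like saturated green
broad = gaussian_spd(550.0, 0.10, wl)   # washed-out green
```

The renderer then integrates curves like these against the observer or camera sensitivities to get back to tristimulus values, which is exactly why the output is still plain RGB as far as ACES is concerned.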
No. It does not.
"Colour" can be broken down into two classes:
- Observer based stimulus.
- Observer based sensation / appearance.
To "manage" "colour", one of those must be managed.
It manages neither.
And given that "other existing color systems" such as graphic-design-oriented ICC in fact do attempt this, ACES is well behind. Don't believe me? Watch the VWG presentations for output transforms.
Check again.
But more importantly, ask yourself what this non-existent stimulus represents. Answer: Nothing. So the problem here is arriving at meaning, because even though no display can emit that stimulus, it is compounded by the fact that it then becomes an exercise in creating a meaningful and consistent output. ACES' answer here? Currently, just clip. That's device dependency across sRGB, DCI-P3, you name it. Arbitrary output everywhere.
Secondly, the "gamut" mapping approach is to distort. So any conventional meaning derived from a camera-fitted observer stimulus matrix is now totally distorted toward the digital complements again. And now it's baked deeply into your imagery. Forever. Oh… and it doesn't actually fix the root problem at hand, because it was a potentially misguided attempt.
Not sure where you heard this, but it's rubbish.
Imagine someone has top-dollar oil paints that reach out to the locus in terms of stimulus representations. Now imagine telling a painter not to touch them. It's absolute ahistorical rubbish.
Image crafters use the medium. And they should damn well be permitted to use the entire range of the medium as they see fit.
But with that said, again, ACES is far more broken than that, even with silly constraints.
Have you tried a spectral rendering system? One brilliant person here managed to achieve it with Cycles, and that grew into an actual spectral version that is very close in performance to the stimulus based model. And yes, ACES still stinks on spectral-like content.
I don't know what in the world you're trying to say, and you still haven't given a real counter-example for displays. It's starting to feel like you just don't know what you're talking about. CIE 1931 is literally observer based; the chromaticity chart derives from humans matching pure wavelengths, so yes, it is correlated to the LMS responses, which is why there are no 3 perfect primaries, etc. I shouldn't have to explain that if you know what you're talking about.
Observer based sensation and appearance is much more complicated for a number of reasons, and I've never heard of a color space attempting to deal with that; color is relative, and the human mind sees color relative to the colors around it. Maybe there's a case to be made for accounting for the overall brightness of projection, since the LMS responses differ in darker environments, but that's a pretty niche thing that won't even come up in standard display ranges.
So in one breath you make it sound like the goal is to add complexity to make a perfect representation of reality on a scientific level (which isn't even desirable or being asked for by pretty much any filmmaker), then on the other hand you ask for these filmic responses that are completely removed from how humans see color and almost entirely arbitrary to the chemicals used; it's the complete opposite direction. It's almost like you don't actually know what you want but like to complain.
Values outside of the chromaticity diagram do not mean "nothing"; it's more that they do not represent a physical wavelength or combination of wavelengths that could give a response, and the reason we can't see that AP1 value is because the LMS curves actually overlap. So values outside are more like theoretical values as if the curves didn't overlap. I do agree that for the sake of rendering I would prefer interpreting the colors as on the locus, like BT.2020, and I originally thought AP1 was analogous to that, but I guess ACES wanted to try to get a bit of extra range in that area (probably in an effort to future-proof a bit for displays with different primaries). That being said, in actual practice it should not make a huge difference; RGB renderers are already further from reality in other ways, but I'm open to proof otherwise.
This assertion is just funny to me because it's so easy to disprove. Pretty much every person with a computer is viewing everything through sRGB, which has a much smaller gamut, and somehow even the most saturated values in sRGB are more saturated than almost anything you ever see in reality. Rec.2020 and AP1 are both much wider, with values you'll likely only ever see from lasers and may never see in your life, and even if you had to represent them they would not be represented pure.
Yeah, and you'll notice the results look almost identical to RGB in almost every non-contrived case. Personally I would love everything to be spectral; certain things definitely do benefit from spectral representation, but it's on the renderer to figure that out. It's something I was similarly passionate about ten years ago, until I did actual tests. No one in their right mind is realistically asking color management itself to go spectral; it's way, way outside of ACES' jurisdiction to do that. Yet again it seems you just don't get the purpose of ACES as a standard for actual existing production, so there's no use talking more about it.
Management means manage / control.
Any system that claims to be a âmanagementâ system of âcolourâ must manage colour.
Colour is defined under two definitions by the CIE. One can loosely be summarized as a stimulus based specification, and the other via sensation / appearance.
What this means is that in ACES, if you specify the stimulus specification via the CIE model for, say, a ColorChecker 24, what you get out is not a match for stimulus. And it is also not a match for appearance. It does not manage colour.
As for displays, they too are typically anchored in the stimulus model. That means that what you put into ACES doesn't come out. Again, it isn't a management system. And worse, that stimulus is mangled up differently per output device.
See also ICC v2 and ICC v4 as ancient examples of colour management systems that do just that.
Ahistorical nonsense. I challenge you to look up some of the canonized names in colour science with papers. You'll find that a huge number list "Kodak" under them. There's a reason for that; film was engineered entirely around appearance and said research. Even the most fundamental basic rate of change was engineered.
Bzzt. Wrong answer.
The CIE XYZ specification is literally an affine transformation away from LMS cone response, and as any good colour science peep knows, negative stimulus is nonsense.
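For readers following along, one commonly cited version of that linear relationship is the Hunt-Pointer-Estevez matrix, normalized to equal-energy white; a quick check:

```python
import numpy as np

# Hunt-Pointer-Estevez XYZ -> LMS matrix, normalized so that
# equal-energy white (X = Y = Z = 1) maps to L = M = S = 1.
M_HPE = np.array([[ 0.38971, 0.68898, -0.07868],
                  [-0.22981, 1.18340,  0.04641],
                  [ 0.00000, 0.00000,  1.00000]])

lms = M_HPE @ np.ones(3)
print(lms)  # each component is ~1.0
```

Other normalizations (e.g. to D65) exist, but the point stands either way: XYZ and cone space are a single matrix apart.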
That spectral locus edge is literally the edge of the standard observer model with respect to the purest of wavelengths. The values "beyond" are merely a mathematical byproduct. They are complete nonsense with respect to standard observer stimulus.
Are you guessing or do you want to know why that poor decision was made?
Things with wider-than-sRGB chroma representations…
- MacBook Pros since 2015
- iPhones since 2016
- iPads since 2016
- Many Android phones since revision 9.
- Creative colour film since the Wizard of Oz.
Also, not quite sure what sRGB has to do with things, because again, trying to be clear: ACES doesn't manage stimulus, so it's random output.
Anyway, for folks who don't really care about gamut voids and have no real idea what they are looking at, go nuts… use ACES. For folks who are actually keen on forward looking solutions, use TCAM and Baselight.
It does not manage SPECTRAL color. For the one millionth time the spectral makeup of a display system is completely out of bounds of the ACES spec. It manages color to the same extent as any other color space based on XYZ, in other words color spaces designed for displays and additive projection, which is literally the point of ACES.
I have absolutely no idea why you are criticizing ACES for literally being designed for something different than what you want. It's akin to complaining that websites don't use EXR instead of JPEG; it's like you don't even understand the point of the specification and its role in an actual production pipeline.
ICC literally does almost the exact same steps as ACES. With ICC there's a device mapping into XYZ; ACES is also based on having device mappings (IDTs) into XYZ coordinates. It's still not entirely clear what you're trying to say; neither of them is spectral or clearly more "stimulus based", they're both based on CIE, and it is not a counter-example to my point.
I don't disagree with that; I disagree with the idea that film has anything to do with the human visual system. You're literally getting into the nitty-gritty of human tri-stimulus response. Film is not an accurate representation of human vision; you get all types of crazy shifts in color depending on the film stock used and how it's developed. It's completely counter to the accuracy you keep describing as so important.
Did I ever say differently? That's the point. Negative values come from the overlap in LMS responses being compensated for in the CIE 1931 color matching experiments. If there were no overlap there would be no need for negative values and the weird chromaticity shape from normalized values. If we had a physical way to trigger M cones in isolation, the color would be seen as a green more intense than any physical wavelength. We can sort of emulate this effect with cone fatigue: staring at a bright magenta color and seeing the ghostly green after-image. But honestly, who cares; that's not the actual point.
We use imaginary color primaries all the time to encompass the whole spectral locus. It's at the heart of most of our color spaces, and of XYZ, which is the basis of pretty much all our color science.
The actual point I was trying to make is that for most of what ACES is designed for, it doesn't really matter whether there's a physical representation of the primaries. The main reason you might want one is in something like rendering, and I AGREED with that. If we want to pretend RGB rendering is modeling the real world, I agree the least we can do is keep the primaries either in or on the locus. Points on the locus are much less ambiguous and can be treated as specific wavelengths, so informally they are "specified" (you should be overjoyed). So if you have an unusual obsession with the spectral characteristics of a color space, and the primaries are on the locus, it's probably a safe bet you can figure out three exact wavelengths. My point with rendering is that in practice it doesn't really matter: RGB is still not how reality works, and concerns like that are outside ACES's jurisdiction anyway.
So after all of this unnecessary back and forth, all I'm really getting is you don't understand the point of ACES and you want it to be something that it's not designed to be.
So moving on from that, yes Blender should support an ACES default in some form. That is all.
Reread what I wrote. It doesn't manage colour, in either sense of the definition.
You are conflating negative additive light experiments with meaningless stimulus coordinates.
Again, they hold no meaning with respect to stimulus, which means they cannot be represented on any display, because they are meaningless.
And again, ACES does not manage colour in either CIE definition of the term. Read that again carefully; a stimulus coordinate never makes it out of the working RGB model, and the output corresponds with neither stimulus nor colour appearance. It's meaningless garbage because of the fundamental mechanic of per-channel curves.
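The per-channel mechanic being pointed at can be sketched in a few lines. This is a generic power curve standing in for any per-channel tone curve; the 1/2.4 exponent and the sample triplet are arbitrary assumptions, not anything from an actual ACES transform. Applying the same curve to each channel independently changes the ratios between channels, so the chromaticity of the output no longer matches the chromaticity of the input stimulus:

```python
# Apply the same nonlinear curve to each channel independently, as per-channel
# tone mapping does, and watch the chromaticity (channel ratios) drift.
def per_channel_curve(rgb, exponent=1 / 2.4):  # arbitrary sample curve
    return tuple(c ** exponent for c in rgb)

def chromaticity(rgb):
    """Normalize a triplet to its channel ratios (r + g + b = 1)."""
    total = sum(rgb)
    return tuple(c / total for c in rgb)

before = (0.5, 0.25, 0.1)        # arbitrary linear "stimulus" triplet
after = per_channel_curve(before)

print(chromaticity(before))      # channel ratios of the input
print(chromaticity(after))       # visibly different ratios after the curve
```

The ratios shift noticeably (the red share drops by roughly a seventh for this triplet), which is one way of stating the "output corresponds with neither stimulus nor colour appearance" objection.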
Again, read back and understand the entire point I've been making over and over and over and over again:
1. Blender still isn't properly managed.
2. ACES is overly complex, and if someone wants to try it, go nuts; change the config. Adding a config detail is what leads to the existing garbage that is ancient and out of touch, such as the existing ACES reference in the configuration.
3. ACES doesn't work, and never has. It is not a management system.
4. Congruent with 3., it also makes all imagery look like a gaudy digital mess.
5. It leads to gamut voids which have a colossal impact on albedos, etc.
It's really a combination of things coming from the experience of having to push along colour management in Blender for literally over a decade.
Blender isn't entirely ready, and ACES doesn't work, and ACES leads to hideous work, by default.
I can't fight windbags. I can attempt to explain things. That's all I can do.
Nothing is stopping anyone from using ACES. It doesn't work within Blender quite yet, nor does it even work at all. But hey… have fun. I'll just make a clear case that it's a horrible idea in Blender as a default. Absolutely. Horrible.
Well then neither does ICC nor any other color system I'm aware of. Still waiting for that alternative example.
The results of CIE 1931 are entirely a byproduct of stimulus response. I still have no idea what you're even referring to: what stimulus color space are they supposed to use, if not XYZ? The CIE experiments are based on matching wavelengths to RGB combinations, and when a match cannot be reached, the pure wavelength is compensated until a match is reached (the negative values). This is entirely because of the cone response curves. The reason the chromaticity diagram looks the way it does is because of the cone responses; it's not just a fun coincidence of some random experiment. XYZ is effectively the closest thing to a stimulus mapping that we commonly use.
So with that said, yes, my points still stand. I don't know how much more "managed" color is supposed to be, and I'm not sure that you even know, or can explain, what that would look like. So unless you can clarify that, there's not much more to say, and your criticisms have no valid basis or real-world alternative.
@troy_s If people are consistently misunderstanding what you're saying, then it's your fault for not communicating properly.
And quite frankly, even if everything you say is correct, your attitude and the way you talk make me really not want to listen to anything you say, ever.
You do an absolutely horrible job of convincing other people of your points: you don't answer people's questions, you never explain anything, you just constantly insist "I'm right, you're wrong, ACES sucks, but I won't give any details why." That's not the right way to make arguments.
If you want other people to listen to you, you need to learn how to make good arguments, based on evidence and logic, actually explaining things and backing up your claims with proof, not just insisting over and over that you're right. Yes, it takes more time; yes, it's more effort; but it's necessary.
From my perspective he did explain a lot; he just didn't link any technical data.
I don't have the technical knowledge of the colour management system to argue differently. It's kind of over my head.
But all of this helped me to understand the subject generally.
Now to Blender: it has its glaring issues; some systems get more attention than others, which is why it's so segmented with implemented features, fixes, etc.
If we compare DaVinci Resolve with Blender on the basics of colour management, we can see where Blender is lacking.
So my side question is: does DaVinci Resolve use ACES in any form?
If not, why?
What are the standards for colour management, and/or what does the industry use, and why?
I'm sure having these questions answered would shine a bit of light on where we should go from here: whether ACES is just a blind alley with magical promises, or whether there's a system that fills this role in a better way.
This whole discussion is fascinating to read. I don't know enough about colorspaces and color management to know who is "right". I suspect both sides make fair points.
But it looks like two people speaking completely different languages to each other, both thinking they're speaking the same one.
I did learn a lot more about colorspaces from this discussion, so to me it's a win. And @troy_s could stand to come off his high horse a bit. ACES may be a bad standard, but it's a standard nonetheless. If lots of people use something, you have to accept that they find it useful, however ridiculous it may be to you. I still think you make a lot of good points, don't get me wrong. And fighting for something better than ACES is probably a worthy fight. And if the plan were to make ACES the Blender default, I could understand your apparent anger. But that's not the case. People just want to have it as an (edit:) "easy to select" option.
edit: oops, this wasn't really meant as a reply to @Dragosh. But I agree with you completely.
edit2: I understand you can already use ACES with Blender now if you put the right config files somewhere, or fiddle with environment variables, or some such magic incantation. That's maybe fine, but most people just want a dropdown.
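For reference, the "magic incantation" is just pointing the `OCIO` environment variable at an ACES OpenColorIO config before launching Blender, as mentioned earlier in the thread. A minimal sketch; the config path below is a placeholder for wherever you actually put a downloaded ACES OCIO config:

```shell
# Point OpenColorIO-aware apps (Blender included) at an ACES config.
# The path below is a placeholder — use wherever your config actually lives.
export OCIO="$HOME/aces_1.2/config.ocio"

# Sanity check: the variable is visible to anything launched from this shell.
echo "Using OCIO config: $OCIO"
```

Then launch Blender from that same shell so it inherits the variable (on Windows, set `OCIO` as a user environment variable instead); Blender reads it at startup and swaps its color management over to that config.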