Thoughts on making Cycles into a spectral renderer

It’s fixed AFAIK. And it would try to compress the gamut as the intensity goes up, eventually pathing to white when it exceeds the upper limit. This is not just a log curve; it’s gamut mapping that needs to be engineered.

I am not quite sure, I guess this would be one of those design decision questions that Troy asked, specifically the second one:

If the range is dynamic, the situation would be “brighter and darker shots will look exactly or approximately the same”.

In my opinion, this is against artistic creativity. So probably you wouldn’t want that. Again I am not sure about this.


The only thing I know is that hue skew is also partly a product of the human eye’s nonlinear response. A lot of classical paintings show this effect as well.

Other than this I failed to understand what everything else is about… I’ll hold on a second until I understand what you guys are on about 🤔

What you mean could be a photometry effect.

We are talking about radiometry. Brighter light means more photons, which means higher lightwave amplitude. The wavelength and frequency stay the same.

You can find this all over the net, like this

yes absolutely, this is physics.
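The physics here is easy to sanity-check with a few lines (my own quick sketch; the constants are standard values):

```python
# At a fixed wavelength, doubling the radiant power doubles the photon
# rate; the per-photon energy (set by the wavelength) does not change.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photons_per_second(power_watts, wavelength_m):
    photon_energy = H * C / wavelength_m   # energy per photon, J
    return power_watts / photon_energy

rate_1w = photons_per_second(1.0, 550e-9)   # 1 W of green light
rate_2w = photons_per_second(2.0, 550e-9)   # twice the power
# rate_2w / rate_1w == 2.0: twice the photons, identical wavelength.
```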

The example bar graph in a previous post shows the sRGB graph clamping at 1.0 and causing a hue shift (the RGB ratio changing). It’s not the same cause. Also, this clamping is very similar to how cameras work, as they mostly do clamp in this way. If you don’t clamp, it doesn’t matter.
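To make the ratio point concrete, here is a tiny sketch (my own toy numbers, not the graph from the earlier post) of how a per-channel clamp at 1.0 skews hue:

```python
# A per-channel clamp changes the ratios between channels, which reads
# as a hue shift once the colour is displayed.

def clip_channels(rgb, limit=1.0):
    """Naive per-channel clamp, similar to what a sensor or sRGB encode does."""
    return tuple(min(c, limit) for c in rgb)

# An intense orange: R is twice G.
hot_orange = (4.0, 2.0, 0.4)
clipped = clip_channels(hot_orange)   # (1.0, 1.0, 0.4)

ratio_before = hot_orange[0] / hot_orange[1]   # 2.0
ratio_after = clipped[0] / clipped[1]          # 1.0
# After clipping, R and G are equal, so the colour skews from orange
# toward yellow even though no hue change was intended.
```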

You mean this “Filmic as default is hurting the experience for people” graph?

I have found this paper
Implementation of an HDR Tone Mapping Operator
Based on Human Perception

and this paper

Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

But this too would be broken.

  1. Camera sensor captures are closer to capturing tristimulus data, but still are subject to clipping.
  2. Tristimulus data is not an image.

TL;DR: Rolling through the spectral sensitivity of a digital camera can only help us match CGI to the mechanism by which a digital camera captured an equivalent spectral stimulus; it gets us no closer to forming an image. While useful, emulating a digital camera is a complete dead end.

We can skip that and assume that the tristimulus render values are “idealized”, plus or minus spectral model rendering versus tristimulus model rendering.

They don’t. Electronic sensors are more or less linear. Again, digital sensors capture camera observer tristimulus, that we transform to standard observer tristimulus. That challenge of image formation is in the realization that tristimulus data is not an image, and can be visualized quickly via the blue ball / cube example.


Sadly, this answer is flatly false. Standard observer “Brightness” for lack of a better word, is directly related to hue. This is part of the problem present in the blue sphere example.

100% correct here! The problem is that when forming an image we are straddling a representation inspired by radiometry, as transformed through photometry and visual appearance. The latter is the part that makes this extremely challenging. The hard science side is trivial!

Equally sad is that the definition is a rather archaic one of “brightness”, leaning on luminous intensity as the sole facet of brightness. More contemporary research has shown this to be false.

200% this.

If one goes through the transforms, and avoids the complexities of spectral versus tristimulus nuances of differences, one ends up at open domain tristimulus values.

We can skip all of that complexity and focus on pure BT.709 renders and see that the problem, even with well defined tristimulus values that can at least have the primary match perfectly, remain the crux of the problem to solve.

As above, in comparing the two images formed.

Indeed. Camera captures should be considered an additional complexity / problem layer. They provide no help for the core question as to “How to form an image?” and most certainly only confuse the subject. For all intents and purposes, it can be helpful to consider a camera capture as nothing more than linearized tristimulus values, subject to clipping.

100% correct as best as I can tell. To start with the canon, it’s always prudent to start with the CIE definitions, as they are the authorities on the matters at hand. They too have some ambiguities of course, because it is a damn challenging surface!

The three primary, and still somewhat problematic, terms are:

brightness

attribute of a visual perception according to which an area appears to emit, transmit or reflect, more or less light

lightness, <of a related colour>

brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting

luminance

Lv; L

density of luminous intensity with respect to projected area in a specified direction at a specified point on a real or imaginary surface

Lv = dIv / (dA · cos α)

where Iv is luminous intensity, A is area and α is the angle between the normal to the surface at the specified point and the specified direction

David Briggs has some incredible demonstrations on his website worth looking at while pondering the subject. He is a painter, so it should be no surprise that he has explored this area rather extensively; most folks who are trained in image work study and have an acute awareness of these issues.

TCAM V2 has a chromatic attenuation component as the values move up in terms of their radiometric-like output. Note that there are still appearance based issues here, such as Abney effect that would need to be corrected for (blue strip will appear purple as the chroma is attenuated).

Also note that the attenuation in TCAM V2, while solid, somewhat still doesn’t address overall brightness concepts, as we can see with yellows being attenuated potentially too rapidly with respect to other mixtures. Still, arguably one of the more solid offerings at the moment. Still, a massive amount of room for improvement.

This is again, avoiding the problem of forming the image! It’s seductive, but the simple problem is that a display cannot represent the tristimulus of what is in front of us, and further… it shouldn’t!

Think about standing next to a camera and looking at a light as it gets brighter and brighter. Your visual system would constantly adapt! This isn’t great when we consider the years and years of incredible photography printed to mediums that have beautiful imagery, with smooth transitions from no exposure, through middling values, and gradually depleting to paper or projected achromatic.

This is why all attempts that get too bogged down in the visual system lose sight of the medium. A watercolour is a wonderful medium not in spite of its representational limitations, but because of them; it’s the watercolour’s response to paper, and the limitations within that, that make the formulation of imagery within it incredible.

“Hue skews” are tricky. “Hue” is a perceptual term, meaning it is based on our sensation of appearances. Arguably, relative to our sensations, hues do not skew, and rather it is a flaw in how we formulate our imagery. Abney effect on blue is trivial to see for example. That too could be considered a “skew”, albeit arising for a different reason. Should it be “corrected” or not? Where does this fit into the protocol and pipeline?

Sadly not the case as said above. Those answers are tragically devoid of study.

Filmic skews too! This is why the monumental shift away from accidents is required to move forward in a contemporary manner.

Way back when I was trying to create a transform for the small group of image makers working on their own creative stuffs, I explored the attenuation using chromaticity linear approaches and wider gamut rendering etc. For a number of reasons, I was never able to make it work in a way that I could reasonably justify. It was just more garbage. It is at least feasible to address the issues, but the solutions are not exactly as straightforward as some might believe. And plenty of open questions!

Hue is a tricky one and we have to make sure that we aren’t lumping all of the manifestations of “hue skew” under the same umbrella; the causes and needs vary. If we were to iron-fistedly assert that the tristimulus chromaticity angle “never skews”, it would result in perceptual hue skews. Conversely, if we assert that the perceptual hue “never skews”, the chromaticity angle in terms of light transport-like mechanics would indeed skew!

Skim through it and see if it addresses brightness in the fundamental manner David Briggs addresses above. It is a well identified subject, with virtually no exploration of the implications for “tone mapping”. Likely because the authors often fail to do their due diligence and explore what the term “tone” means.

Sadly, that loops back to the century of research from people like Jones, MacAdam, and Judd… who were specifically interrogating the nature of image formation!

It’s always ironic when the very people who criticize questions and interrogation are the very folks who should be doing so.

Feel free to do whatever you think “works”.

For the rest of the folks who actually care about their work and forming imagery, they will hopefully find the subject fascinating, as others have for over a century plus.

After all, if it doesn’t matter, just turn your display or lights off.


More answers than questions this time. :+1:

I read through 90% of this post and learned a lot, but it saddens me to see it still not in master after more than 3 years of “discussion”.

Also, only people who really do care about something get frustrated when things aren’t moving much; turning the display/light off won’t solve the issue…

Please note that none of this discussion is irrelevant; spectral rendering and the attenuation of energy? Awesome!

Using it as the basis for image formation? Unsolved, and arguably highly problematic.

Also bear in mind that there are probably a number of people you can count across two hands who are paid to daily focus and work on details related to this.

The folks at the Institute are working and busy. And the folks not at the institute are busy and working. It isn’t like @smilebags and @pembem22 are getting paid to focus on spectral. Are they doing tremendous work in their limited life time? Absolutely.

I hope you didn’t read my intentional snark as dismissive. I too would love to give everyone a solution that works. I really would. I’m sure @pembem22 and @smilebags would love to get things fixed and power away full time on it!

In some cases, specifically the show stoppers I am talking about, there are just poor options. In the void of a better solution, with clearly defined “What is better?”, the current solution is probably a push. Just leave it be.

That is, if I’m going to offer you a solution, I want to be able to have it work for you and your creative image making needs! Not build up a box of excuses that fall apart! I don’t want you pointing out some horrific breakage that I am responsible for! I want you to feel creatively empowered.

That said, please appreciate too that it isn’t like a complete deadlock. The ideas around “brightness” are actually moving forward, and that is a massive deluge of good news.

As for spectral getting integrated? See above. It comes with challenges of time and design, and we all need to consider the design of Blender as a whole and how much it could disrupt things etc. Those are real design challenges that need to be thought about.


Hm ok.

What about dynamic tone mapping? Is the photopic eye response curve useful for tone mapping? I guess yes and no. No, because as observers we already have that eye sensitivity all the time. And yes, because on an LDR monitor we have the same problem Filmic has with bright colors that need to be displayed from HDR to LDR.

If you have, say, mid-contrast HDR lighting, then you don’t need as aggressive a compression curve as with a very high dynamic light range.

If the tone mapping algorithm can measure the max light in a frame, could it use a dynamic curve? Maybe even the photopic eye curve at typical brightness, and maybe the scotopic eye curve if the light is very low?

I guess there is not one ideal curve for every lighting condition, so maybe only a dynamic algorithm could better fit different light ranges?
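Roughly, what I mean could be sketched like this (just a toy of mine using an extended Reinhard-style curve, not an existing Blender feature):

```python
# Measure the frame's maximum value and normalise an extended Reinhard
# curve against it, so a low-contrast frame gets gentler compression
# than a very high dynamic range one.

def dynamic_tonemap(pixels, eps=1e-6):
    """Extended Reinhard: out = L * (1 + L / Lmax^2) / (1 + L)."""
    l_max = max(pixels) if pixels else eps
    return [l * (1.0 + l / (l_max * l_max)) / (1.0 + l) for l in pixels]

low_contrast = dynamic_tonemap([0.1, 0.5, 1.2])
high_contrast = dynamic_tonemap([0.1, 0.5, 1000.0])
# The brightest value in each frame maps to exactly 1.0, so the curve
# adapts to the range actually present; the midtone 0.5 is compressed
# less in the low-contrast frame than in the high-contrast one.
```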

What are we trying to do? Emulate the HVS or form an image?

This approach is easy to demonstrate if we think about the “brightest” object in a scene. To avoid arguments, let’s say we have a sun that we are pointing the camera at, and we are exposed on a face in a car driving. As the car drives, the sun oscillates in and out of trees and leaves. Guess what the implications are for an image.

Further, imagine if someone were drawing or painting the person in the scene, and were attempting to communicate the screaming hot sun crushing down on them as a backlight. Does a dynamic HVS response do the image justice here?

I would argue that while image formation depends on the HVS, it is not an absolute emulation thereof. We need to respect the medium, for all of its capabilities and incapabilities, and formulate an image to it.

See blue sphere example to really hammer that point home perhaps.

Maybe it should be included as a “Look”? Recently I have been checking out OpenDRT (though I haven’t been able to test it in Blender yet, due to the lack of an OCIO version). I think I saw a “Notorious Six” look file name in there, so it seems it even includes one; maybe the de-Abney effect and so on can also be looks to apply optionally?

It likely should be default, but optional. There are methods that allow us to apply perceptual facets on top as a layer, leaving the light-transport-esque formulated image as chromaticity linear attenuated.

There are a number of places where this is useful, such as folks who want to grade or manipulate the image state, or video walls to be rephotographed, etc.


Troy, I think this is the home-made problem with Filmic and TCAM V2. As clever as it is to desaturate the color towards the whitepoint with increasing value, it mangles the color, as you always say.

If you want to keep some colors saturated as they are, then you have to change the method somehow.

You know the classic three shots at +2 and −2 stops from EV0, combined in your graphics app.

As old as it is, I think it’s maybe a good starting point. The idea: keep the important midtones at EV0, and blend in the overexposed data from the −2 shot, which has the bright colors you want.

I know this maybe sounds too simple, but you use the colors as they are shot or rendered, and you want that, right? I mean, the data you want is stored in the shot. Except when even the HDR shot is clipping, of course.

Now it’s up to you how to make use of the data, hehe.

And another idea, if you want to make use of the full range of maybe 16 stops: I would do the classic −2 for the main colors within 2 stops, and use Filmic on the pixels that go above those 2 stops. This way you can keep some bright colors, and everything way above gets the Filmic desaturation curve.

Hope you get the idea.
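Roughly, the blend could be sketched like this (just a toy of mine; the threshold and blend width are made-up parameters):

```python
# Keep the EV0 midtones, and above a threshold blend toward the -2 EV
# re-exposure (0.25x) of the same scene-linear value, so bright regions
# are pulled back into range.

def blend_exposures(linear, threshold=1.0, width=3.0):
    """Blend EV0 (identity) with the -2 EV version (0.25x) above threshold."""
    minus_two = linear * 0.25
    if linear <= threshold:
        return linear
    # t ramps 0 -> 1 over `width` units above the threshold
    t = min((linear - threshold) / width, 1.0)
    return (1.0 - t) * linear + t * minus_two

midtone = blend_exposures(0.5)   # untouched: 0.5
bright = blend_exposures(4.0)    # fully pulled to its -2 EV value: 1.0
```

Note that a naive blend like this simply darkens bright regions rather than forming a path to white, which is one obvious shortcoming of the sketch.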

You miss the point; the medium is limited. You simply cannot express the range, and the range isn’t what is important in the first place.

But feel free to give things your own attempts at image formation. It will all become very clear.

Probably going to be tricky to design something when a definition of “bright colour” is the question at hand. By definition, this attempt would darken things in a rather surreal manner.

Sure, we know that we are all watching TV on LDR, or on monitors, even in cinema with the projection from film. There was a statement: if you can make a photo of it, you can render it.
Has this changed?

Tbh my postings are just ideas that came to mind; maybe they are useless, or maybe you get a new idea from them, that’s all. I think you have read almost all the new papers on this tone mapping topic?

If we can, help us to help you, so we can brainstorm better ideas.

The thing is, it is impossible for the medium (your monitor) to display high intensity and still fully saturated color. If you try, you will end up with the “Notorious Six”.

Even if you have a medium that can do it, the question is “Do you really want to lose the ability to overexpose things?” Just think of how overexposure has been such a creative tool for artistic purposes, I think overexposure is a very important part of the artistic side of image formation.

Considering both the limitation of the medium and the artistic situation I believe path to white is already the most sane solution here.
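A minimal sketch of the path-to-white idea (my own toy math, not Filmic’s actual curve; `white_start` and `white_end` are invented parameters):

```python
# As intensity approaches the medium's ceiling, blend the colour toward
# achromatic instead of letting the channels clip one by one.

def path_to_white(rgb, white_start=0.5, white_end=4.0):
    intensity = max(rgb)
    if intensity <= white_start:
        return rgb
    # t goes 0 at white_start -> 1 at white_end: progress along the path
    t = min((intensity - white_start) / (white_end - white_start), 1.0)
    display = min(intensity, 1.0)
    # Blend the normalised colour toward achromatic `display` white.
    return tuple((1.0 - t) * (c / intensity) * display + t * display
                 for c in rgb)

hot_red = path_to_white((8.0, 1.0, 1.0))   # lands on display white (1, 1, 1)
dim_red = path_to_white((0.3, 0.1, 0.1))   # below threshold: unchanged
```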


I don’t know if I understand this. A photograph in the film sense, formed an image. A photograph in the digital capture era is not a formed image, and rather a capture of stimulus relative to a digital observer sensor. I see them as quite different things, hence why I don’t quite understand the statement?

Of course not! I have read more than a few, and the few that define “tone” seem to be solely focused on the idea that luminance forms all of tone, which I currently do not believe is correct.

I couldn’t applaud this framing any more! This is precisely one of the facets at work here. I’d go so far as to even say that the term “overexposure” is a tad of an overstep!

When we use subtractive mediums, there is literally no “overexposure”; we vary the depth of a filter against some constant illumination / projected source. That is, what we see is a continuum from maximal filtration blocking (projected creative film) or reflecting the filter (paints) to minimal / no filtration. It’s a continuum here, with no clearly defined “overexposure”.

With that said, I completely agree with your summary. Perhaps this is why image formation systems that equate image formation with an emulation of the human visual system fail rather profoundly? Not sure! Still learning!

Note that in this case, we could describe a “path” in terms of colourimetric xy coordinates. It could answer whether a given chromaticity marches in a straight-line path to achromatic.

I don’t believe that the discussion of subtractive-inspired mediums implies a “path”, but more specifically addresses an appropriate “rate of change”.

In the blue sphere example, we have a number of things going wrong. Of course, we can say “the illuminating light should be peak display achromatic!” But that feels like only a portion of what is going wrong here.

If we look at the result, which is basically some “curve” applied to the source tristimulus blue channel, we can see a deeper problem. Sure… we will likely escape the range of the medium’s blue emitter, but also pay attention to the rate of change of the blue! It goes through regions that do not properly communicate, as best as we can guess, the rate of change of the illumination across the surface. The other interesting facet here is that this rate of change varies in terms of purity and hue angle; if we did this with a yellow light source and applied precisely the same curve, we’d end up with a different apparent rate of change!

This is why my personal, completely irrelevant and anecdotal belief is that if we are trying to crack this spectral image formation nut, it requires disentangling whatever the heck brightness’s relationship is with hue and purity.
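The yellow-versus-blue point is easy to demonstrate numerically (my own toy sketch, not anyone’s shipping transform):

```python
# Apply the same per-channel compressive curve to an intense blue and an
# intense yellow, then compare how much BT.709 luminance each retains.
# The identical curve yields a different apparent rate of change per hue.

def curve(x):
    """Simple Reinhard-like compression, x / (x + 1)."""
    return x / (x + 1.0)

def apply_curve(rgb):
    return tuple(curve(c) for c in rgb)

def luminance(rgb):
    """BT.709 luminance weights."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

blue = (0.1, 0.1, 4.0)    # intense blue
yellow = (4.0, 4.0, 0.1)  # intense yellow, same peak channel value

blue_retained = luminance(apply_curve(blue)) / luminance(blue)
yellow_retained = luminance(apply_curve(yellow)) / luminance(yellow)
# Blue retains a larger fraction of its luminance than yellow under the
# exact same curve, so the apparent dimming rate differs by mixture.
```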


I haven’t been following this discussion for quite a while. Two simple questions:

  • Will the spectral functionality be merged with the main Blender’s Cycles anytime in the near future?
  • Has the spectral functionality been updated to the new Cycles (X) ?

I hope we are able to, yes. There are a few things to iron out first, and of course there’s a good chance there will be some challenges which will come up after getting a review from the Blender devs, but a lot of the core features are in place, there’s just a bit too much bugginess for it to be acceptable at this stage.

Yep, @pembem22 has done all of the hard work in migrating it over to Cycles X.


Have you seen this? The results look quite good to me.

Retina inspired tone mapping method for high dynamic range images

If you want to download the HDRIs used in this paper for your own test renderings, here they are