Thoughts on making Cycles into a spectral renderer

I completely agree with the first block you wrote.

Why not? Troy was asking

If you use the same primary filters of a specific digital camera in your Blender camera, i.e. the same primaries/colour matrix (CM), then in theory it should render the blue object as if it was taken with that camera.

Sure. Think of it this way: the raw files from a camera are very similar to an EXR rendering, or at least they should be, depending on how accurately the render engine, its materials and the setup were used.

Remember, if there were no filter array, Foveon layer or prism system to achieve trichromatic separation in a digital camera, you would get a monochromatic picture.

Since we always have RGB channels in an EXR render, we already have a sort of filtering or CM.

The possibility to change that CM to a camera's filter or CM would be nice. Until now we cannot change or select anything here.
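As a purely illustrative sketch of what “swapping the CM” could mean (not something Cycles or the spectral branch exposes today), one could multiply the scene-linear EXR RGB by a 3×3 matrix standing in for the camera's filters. The matrix values below are made up:

```python
import numpy as np

# Hypothetical, made-up 3x3 matrix standing in for a camera's "filter" / CM:
# it maps scene-linear BT.709 RGB into an imaginary camera-native space.
BT709_TO_CAMERA = np.array([
    [0.85, 0.10, 0.05],
    [0.08, 0.84, 0.08],
    [0.03, 0.12, 0.85],
])

def apply_camera_cm(rgb_pixels, matrix=BT709_TO_CAMERA):
    """Apply a 3x3 colour matrix to an (..., 3) array of scene-linear RGB."""
    return np.einsum('ij,...j->...i', matrix, np.asarray(rgb_pixels))

# A single scene-linear pixel of the "blue object" from an EXR-style render.
pixel = [0.02, 0.05, 0.60]
print(apply_camera_cm(pixel))
```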

Hmm, you seem to be talking about the camera response function thing? It was present in the 2.93 spectral branch, with the default set to the CIE standard observer. I guess it was considered too advanced for the initial merge, so it is not in the 3.0 spectral branch. Smilebags once mentioned it as:

But I think it is separate from the issue we are talking about, namely the better view transform we need in the spectral branch.

My understanding of that question is quite different. In my reading, he was trying to get people to make decisions about image formation. And this question should be combined with the second and third questions:

I believe he is talking about RGB rendering here instead of spectral, and he is talking about image formation. Look at all three questions: don't they sound like the color sweeps thing?

He is basically asking: how should the sweep look? As the intensity increases, should the bright corner remain pure BT.709 blue? Or should it path to white like Filmic?

My understanding is that he is trying to lead people to appreciate the complexity of image formation using simple questions like these; I don't think it is really about camera filtering etc.
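To make the sweep question concrete, here is a toy sketch (my own illustration, not Filmic's actual math) of a pure BT.709 blue pushed up in exposure and formed two ways: hard per-channel clipping, which keeps the chromaticity pure until it blows out, versus a naive blend towards display white:

```python
import numpy as np

blue = np.array([0.0, 0.0, 1.0])        # pure BT.709 blue, scene-linear
white = np.ones(3)

for stops in range(-2, 5):
    scene = blue * (2.0 ** stops)
    clipped = np.clip(scene, 0.0, 1.0)          # policy A: stays pure blue, then clips
    # Policy B (illustrative only): blend towards display white once the
    # value exceeds what the display can emit.
    over = max(scene.max() - 1.0, 0.0)
    t = over / (over + 1.0)                     # 0 at the display ceiling, -> 1 far above it
    to_white = np.clip(scene, 0.0, 1.0) * (1.0 - t) + white * t
    print(f"{stops:+d} stops  clip={clipped.round(3)}  path-to-white={to_white.round(3)}")
```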

Ah, I see. I think it depends on camera settings, especially exposure, if you want the same result as a film or photo. Think about an HDR photo or video footage: you have to use a grey point to get a well-exposed image.

Maybe the Filmic curve is one of the best compromises you can get today.

How would the blue look if you underexpose the image? I guess the white/bright part gets its blue back, towards the left.

As I found on the net: if the number of photons increases, the brightness increases (the amplitude of the light wave), but the hue should stay the same.

In mathematical terms, the radiant energy E of the electromagnetic radiation is set by the wavelength λ (with light speed c and frequency ν), multiplied by how much of it there is.
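Written out, the claim is that the total radiant energy scales with the photon count n, while the wavelength λ and the frequency ν of each photon, and hence the spectral composition, stay fixed (h is Planck's constant):

$$E_{\text{total}} = n\,h\nu = \frac{n\,h c}{\lambda}, \qquad \nu = \frac{c}{\lambda}$$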

Yes, kind of. Just use the same primary filters as in the given camera. Of course you can make it as accurate as you can get.

The use of the word “brightness” introduces ambiguity. What does it mean? Does it mean intensity? But the maximum emission power of your monitor is limited and set in stone in its hardware (which is why Troy emphasizes that image formation needs to consider the medium; if you simply view brightness as the amplitude of the light wave, you are ignoring the medium and therefore ignoring the actual image formed on it). And because of our wonky perceptual system, two colors with the same intensity might appear to have different brightness, so what then? Troy has been researching related topics about this recently. It has also been mentioned in this thread previously, the “greyness boundary condition” thing etc.
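A small numeric illustration of why a single “intensity” number is ambiguous (my own sketch; the BT.709 luminance weights are standard, and the appearance claim leans on the Helmholtz-Kohlrausch effect):

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])         # BT.709 luminance weights

saturated_blue = np.array([0.0, 0.0, 1.0])
matched_grey = np.full(3, LUMA @ saturated_blue)   # grey with the same luminance

print(LUMA @ saturated_blue)   # 0.0722
print(LUMA @ matched_grey)     # 0.0722 as well
# Identical measured luminance, yet the saturated blue patch typically
# *appears* brighter than the grey one (Helmholtz-Kohlrausch), which is why
# "brightness" as a single intensity number is ambiguous.
```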

I think this is the “Chromaticity Linear” that TCAMv2 achieved:

I am talking about radiation and its reflections in a render scene.

The medium is important, and yet not. If you have a reasonably calibrated monitor or TV, then you should see sRGB material from black to white with all colors. Except, of course, if some cheap old LCD has a very low bit depth for its display.

The ideal would be for every device to have its own best image transform. But I think this could only happen in years, or maybe as an app/plugin, who knows.

Not really. As I have said before, you can have a Nishita sky using the real sun's strength, but you can never have a monitor as powerful as the sun.

Think of what happens if you just assume the monitor can display the hue correctly merely because the primaries are the same.
Hue skews:

This is why you are not talking about image formation here.

Of course you don't have such a bright monitor. I am talking about displays that can show footage from black to white. If you have an older monitor, maybe your maximum white looks a bit greyish.
How do you watch TV? I guess you have an LCD or OLED. They all display sRGB footage, same as on monitors.

No? I have explained the idea of the camera primaries, and that hue does not change with increased radiation?

Not sure what you mean here; do you mean you are trying to display a closed-domain, already formed image?

The thing is, the ratio between RGB:

Because some channels are “clipping”, the output ratio you see on the screen will never automatically be the same as the open-domain scene reflectance. It has to be engineered depending on the output medium instead of focusing on the scene reflectance. The thing here is called “intensity-based gamut mapping”. And the question Troy asked about how a very bright blue should look is directly related to this.
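A quick toy sketch of the ratio problem (made-up pixel values, with simple per-channel clipping standing in for a naive transform): scale an open-domain value up and clip at the display ceiling, and the displayed RGB ratio is no longer the scene ratio:

```python
import numpy as np

pixel = np.array([0.10, 0.20, 0.90])      # open-domain, scene-linear, made-up values
print(pixel / pixel.max())                # scene ratio: [0.111 0.222 1.0]

for exposure in (1, 2, 4, 8):
    shown = np.clip(pixel * exposure, 0.0, 1.0)   # per-channel clip at the display ceiling
    print(exposure, shown, shown / shown.max())
# Blue clips first, then green, then red: the displayed ratio drifts towards
# [1, 1, 1], skewing hue along the way, instead of staying at the scene ratio.
```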


I know how the Filmic transformation works, although I am not sure if the transformation is fixed or dynamic per scene. If not, maybe a dynamic system would fit better? You don't always have a dynamic range of 16 stops. Even the HDRIs you can load everywhere have different dynamic ranges or maximum light intensities.

TCAMv2 looks as good as Filmic, maybe better, not sure. But it works similarly to the Filmic system, right?

If you could measure the maximum light intensity in a scene, and the transformation curve would adapt dynamically, more subtle or more aggressive depending on how high the dynamic range of the light is, would that be better? Like a human eye automatically opening and narrowing its pupil.

It's fixed AFAIK. And it tries to compress the gamut as the intensity goes up, eventually pathing to white when it exceeds the upper limit. This is not just a log curve; it's gamut mapping that needs to be engineered.

I am not quite sure, I guess this would be one of those design decision questions that Troy asked, specifically the second one:

If the range is dynamic, the situation would be “brighter and darker shots will look exactly or approximately the same”.

In my opinion, this is against artistic creativity. So probably you wouldn’t want that. Again I am not sure about this.

What

The only thing I know is that hue skew is partly also part of the human eye's nonlinear response. A lot of classical paintings show this effect as well.

Other than this, I fail to understand what everything else is about… I'll hold on until I understand what you guys are on about 🤔

What you mean could be a photometric effect.

We are talking about radiometry: brighter light means more photons, which means higher light-wave amplitude. The wavelength and frequency stay the same.

You can find this all over the net, like this:

Yes, absolutely, this is physics.

The example bar graph in a previous post shows the sRGB values clamping at 1.0 causing a hue shift (the RGB ratio changing); it's not the same cause. Also, this clamping is very similar to how cameras work, as they mostly do clamp in this way. If you don't clamp, then it doesn't matter.

You mean this “Filmic as default is hurting the experience for people” graph?

I have found this paper:
Implementation of an HDR Tone Mapping Operator Based on Human Perception

and this paper:
Tone Mapping Operators: Progressing Towards Semantic-Awareness
https://hal.archives-ouvertes.fr/hal-02543939/document

and this paper:
Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

But this too would be broken.

  1. Camera sensor captures are closer to capturing tristimulus data, but still are subject to clipping.
  2. Tristimulus data is not an image.

TL;DR: Rolling through the spectral sensitivity of a digital camera can only help us match CGI to the mechanism by which a digital camera captured an equivalent spectral stimulus; it gets us no closer to forming an image. While useful, emulating a digital camera is a complete dead end.

We can skip that and assume that the tristimulus render values are “idealized”, plus or minus spectral model rendering versus tristimulus model rendering.

They don't. Electronic sensors are more or less linear. Again, digital sensors capture camera-observer tristimulus, which we transform to standard-observer tristimulus. The challenge of image formation is in the realization that tristimulus data is not an image, which can be visualized quickly via the blue ball / cube example.

EG:


Sadly, this answer is flatly false. Standard-observer “brightness”, for lack of a better word, is directly related to hue. This is part of the problem present in the blue sphere example.

100% correct here! The problem is that when forming an image we are straddling a representation inspired by radiometry, as transformed through photometry and visual appearance. The latter is the part that makes this extremely challenging. The hard science side is trivial!

Equally sadly, that definition is a rather archaic definition of “brightness”, one that leans on luminous intensity as the sole facet of brightness. More contemporary research has shown this to be false.

200% this.

If one goes through the transforms, and sets aside the nuanced differences between spectral and tristimulus rendering, one ends up at open-domain tristimulus values.

We can skip all of that complexity and focus on pure BT.709 renders, and see that the problem, even with well-defined tristimulus values whose primaries can at least match perfectly, remains the crux of what needs to be solved.

As above, in comparing the two images formed.

Indeed. Camera captures should be considered an additional complexity / problem layer. They provide no help for the core question as to “How to form an image?” and most certainly only confuse the subject. For all intents and purposes, it can be helpful to consider a camera capture as nothing more than linearized tristimulus values, subject to clipping.

100% correct as best as I can tell. To start with the canon, it’s always prudent to start with the CIE definitions, as they are the authorities on the matters at hand. They too have some ambiguities of course, because it is a damn challenging surface!

The three primary, and still somewhat problematic terms are:

brightness

attribute of a visual perception according to which an area appears to emit, transmit or reflect, more or less light

lightness, <of a related colour>

brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting

luminance

Lv; L

density of luminous intensity with respect to projected area in a specified direction at a specified point on a real or imaginary surface

$$L_{\mathrm{v}} = \frac{\mathrm{d}I_{\mathrm{v}}}{\mathrm{d}A \cdot \cos\alpha}$$

where Iv is luminous intensity, A is area and α is the angle between the normal to the surface at the specified point and the specified direction

David Briggs has some incredible demonstrations on his website worth looking at while pondering the subject. He is a painter, so it should be no surprise that he has explored this area rather extensively; most folks who are trained in image work study and have an acute awareness of these issues.


TCAM V2 has a chromatic attenuation component as the values move up in terms of their radiometric-like output. Note that there are still appearance-based issues here, such as the Abney effect, which would need to be corrected for (a blue strip will appear purple as the chroma is attenuated).

Also note that the attenuation in TCAM V2, while solid, still doesn't quite address overall brightness concepts, as we can see with yellows being attenuated potentially too rapidly with respect to other mixtures. It is arguably one of the more solid offerings at the moment, yet there remains a massive amount of room for improvement.

This is again, avoiding the problem of forming the image! It’s seductive, but the simple problem is that a display cannot represent the tristimulus of what is in front of us, and further… it shouldn’t!

Think about standing next to a camera and looking at a light as it gets brighter and brighter. Your visual system would constantly adapt! This isn’t great when we consider the years and years of incredible photography printed to mediums that have beautiful imagery, with smooth transitions from no exposure, through middling values, and gradually depleting to paper or projected achromatic.

This is why all attempts that get too bogged down in the visual system lose sight of the medium. A watercolour is a wonderful medium not in spite of its representational limitations, but because of them; it is the watercolour's response to paper, and the limitations within that, that make the formulation of imagery within it incredible.

“Hue skews” are tricky. “Hue” is a perceptual term, meaning it is based on our sensation of appearances. Arguably, relative to our sensations, hues do not skew, and rather it is a flaw in how we formulate our imagery. Abney effect on blue is trivial to see for example. That too could be considered a “skew”, albeit arising for a different reason. Should it be “corrected” or not? Where does this fit into the protocol and pipeline?

Sadly not the case as said above. Those answers are tragically devoid of study.

Filmic skews too! This is why the monumental shift away from accidents is required to move forward in a contemporary manner.

Way back when I was trying to create a transform for the small group of image makers working on their own creative stuff, I explored the attenuation using chromaticity linear approaches and wider gamut rendering etc. For a number of reasons, I was never able to make it work in a way that I could reasonably justify. It was just more garbage. It is at least feasible to address the issues, but the solutions are not exactly as straightforward as some might believe. And plenty of open questions!

Hue is a tricky one and we have to make sure that we aren’t lumping all of the manifestations of “hue skew” under the same umbrella; the causes and needs vary. If we were to iron-fistedly assert that the tristimulus chromaticity angle “never skews”, it would result in perceptual hue skews. Conversely, if we assert that the perceptual hue “never skews”, the chromaticity angle in terms of light transport-like mechanics would indeed skew!

Skim through it and see if it addresses brightness in the fundamental manner David Briggs addresses above. It is a well identified subject, with virtually zero to no explorations in terms of implications on “tone mapping”. Likely because the authors often fail to do their due diligence and explore what the term “tone” means.

Sadly, that loops back to the century of research from people like Jones, MacAdam, and Judd… who were specifically interrogating the nature of image formation!

It’s always ironic when the very people who criticize questions and interrogation are the very folks who should be doing so.

Feel free to do whatever you think “works”.

For the rest of the folks who actually care about their work and forming imagery, they will hopefully find the subject fascinating, as others have for over a century plus.

After all, if it doesn’t matter, just turn your display or lights off.


More answers than questions this time. :+1:

I read through 90% of this post and learned a lot, but it saddens me to see it still not in master after more than 3 years of “discussion”.

Also, only people who really care about something get frustrated when things aren't moving much; turning the display/lights off won't solve the issue…

Please note that none of this discussion is irrelevant; spectral rendering and the attenuation of energy? Awesome!

Using it as the basis for image formation? Unsolved, and arguably highly problematic.

Also bear in mind that there are probably a number of people you can count across two hands who are paid to daily focus and work on details related to this.

The folks at the Institute are working and busy. And the folks not at the institute are busy and working. It isn’t like @smilebags and @pembem22 are getting paid to focus on spectral. Are they doing tremendous work in their limited life time? Absolutely.

I hope you didn’t read my intentional snark as dismissive. I too would love to give everyone a solution that works. I really would. I’m sure @pembem22 and @smilebags would love to get things fixed and power away full time on it!

In some cases, specifically the show stoppers I am talking about, there are just poor options. In the void of a better solution, with clearly defined “What is better?”, the current solution is probably a push. Just leave it be.

That is, if I’m going to offer you a solution, I want to be able to have it work for you and your creative image making needs! Not build up a box of excuses that fall apart! I don’t want you pointing out some horrific breakage that I am responsible for! I want you to feel creatively empowered.

That said, please appreciate too that it isn’t like a complete deadlock. The ideas around “brightness” are actually moving forward, and that is a massive deluge of good news.

As for spectral getting integrated? See above. It comes with challenges of time and design, and we all need to consider the design of Blender as a whole and how much it could disrupt things etc. Those are real design challenges that need to be thought about.


Hm ok.

What about dynamic tone mapping? Is the photopic eye response curve useful for tone mapping? I guess yes and no. No, because as observers we already bring that eye sensitivity with us all the time. And yes, because on an LDR monitor we have the same problem Filmic has with bright colors that need to be displayed from HDR to LDR.

If you have, say, a mid-contrast HDRI as lighting, then you don't need as aggressive a compression (curve) as with a very high dynamic light range.

If the tone mapping algorithm could measure the maximum light in a frame, it could use a dynamic curve? Even more, maybe the photopic eye curve at typical brightness, and maybe the scotopic eye curve if the light is very low?

I guess there is not one ideal curve for every lighting condition, so only a dynamic algorithm could better fit different light ranges?
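To make the idea concrete, here is a toy sketch of a per-frame adaptive curve keyed on the measured maximum luminance. I am using an extended Reinhard curve purely as a stand-in; it is not what Filmic or TCAMv2 do:

```python
import numpy as np

def dynamic_tonemap(frame_rgb):
    """Toy per-frame adaptive curve: extended Reinhard, with the white point
    taken from the brightest luminance actually present in the frame."""
    luma = frame_rgb @ np.array([0.2126, 0.7152, 0.0722])   # BT.709 luminance
    l_max = max(float(luma.max()), 1e-6)                    # measured frame maximum
    scale = (1.0 + luma / (l_max ** 2)) / (1.0 + luma)      # extended Reinhard curve
    return np.clip(frame_rgb * scale[..., None], 0.0, 1.0)

# A fake 2x2 "frame" with one very bright pixel.
frame = np.array([[[0.1, 0.1, 0.1], [1.0, 0.8, 0.5]],
                  [[4.0, 3.5, 3.0], [20.0, 18.0, 15.0]]])
print(dynamic_tonemap(frame))
```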

What are we trying to do? Emulate the HVS or form an image?

This approach is easy to demonstrate if we think about the “brightest” object in a scene. To avoid arguments, let's say we have a sun that we are pointing the camera at, and we are exposed for a face in a moving car. As the car drives, the sun oscillates in and out of trees and leaves. Guess what the implications are for an image.

Further, imagine if someone were drawing or painting the person in the scene, and were attempting to communicate the screaming hot sun crushing down on them as a backlight. Does a dynamic HVS response do the image justice here?

I would argue that while image formation depends on the HVS, it is not an absolute emulation thereof. We need to respect the medium, with all of its capabilities and incapabilities, and formulate an image for it.

See blue sphere example to really hammer that point home perhaps.

Maybe it should be included as a “Look”? Recently I have been checking out OpenDRT (though I'm not able to test it in Blender yet due to the lack of an OCIO version). I think I saw a “Notorious Six Look” file name in there, so it seems it even includes a “Notorious Six” look; maybe the de-Abney effect and so on could also be looks to apply optionally?