Thoughts on making Cycles into a spectral renderer

Nope. I’m talking about indirect light. Any light that’s not directly coming from a light source. Certainly Complex-IOR materials involve this. But so does any other shader that isn’t transparent, purely black, or an emitter. Like, even diffuse/highly rough surfaces will act like this.

That said, certain optical effects really amplify the visibility of such. For instance, the volume absorption shader is effectively just a continuous version of ever deeper reflective bounces. So very optically dense absorptive objects are gonna color whichever light passes through them further and further towards whichever color they absorb the least.

I know what you mean: the bounce lights, or better, caustics. The amount of light reflecting from a lit material due to Fresnel, at maybe 10% at one angle, then reflecting off a wall with its own Fresnel at maybe only 4% at the wall's angle, then reflecting off the ceiling at maybe 1%.

For this reason the bounce/caustic calculation of the reflected light should be as good as possible, with cubic light falloff. I think this has the biggest influence on natural light in a render.

You can see how important this is if you look at the Screen Space Global Illumination addon for Eevee, with and without bounce light.

Here's an example if you have almost no reflection, with Vantablack or Musou black, which absorbs over 99% of light.
Look at this video where he painted a box with a light in it: no reflection.


I don’t believe it has anything to do with V2; the transform is broken. The default configuration was engineered to work in tandem with the spectral reconstruction. The reconstruction cannot be separated from the default transform.

All of the general aesthetic analysis is somewhat moot beyond values that make their way to the display 1:1, which is completely not the case with Filmic or any other transform.

It’s just a broken idea to use anything other than the default configuration that was shipped. :man_shrugging:


not exclusively a Filmic issue:

Happens with Display Native too. If anything, that looks even more reddish. But of course that doesn’t mean this wasn’t accidentally broken. Certainly things look right if spectral is turned off.

Also, if you render out anything in the spectral branch before this, and then open the EXR in another version of Blender where Filmic is available, you do not get this reddishness problem.

Filmic may have a problem with out-of-gamut colors. But it doesn’t generally add colors to grayscale scenes.

All I can say is that I generated the transform to account for chromatic adaptation. It seemed to work, but perhaps there was a bug. In theory, R=G=B should be achromatic at the display.

With that said, the previous nuances about adaptation still stand; a proper D65 spectrum will appear blue relative to the proper transform output.

And again, it is indeed possible that I made a mistake!

As said, in earlier versions (like, literally the second most recent version), it did work. I’m guessing that already used your chromatic adaptation, right? So I think that was fine. Definitely something wrong now though, yeah.

Also, if these color management options do not change at all between RGB and spectral rendering, shouldn’t this redness issue also occur in RGB Cycles?
Honestly, it really does look like whatever fixed the pinkish results in the past has accidentally been removed.

Here’s the result for 6500K spectral blackbody in the current version:

and in the previous one:

Clearly there previously was a slightly bluish tint to it, whereas now it’s close to the neutral white it was way back when.

If you swap out the current config with the older, it works properly?

How would I do that? Is it just some folders I gotta swap around?

Indeed. Just go into release/datafiles and copy/back up the colormanagement folder. The other option is to keep a copy of the configuration you want to use and export OCIO=/path/to/the/configuration/config.ocio if on Linux.
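For instance, the steps might look like this (all paths besides release/datafiles are illustrative placeholders; adjust to your own build layout):

```shell
# Back up the colormanagement folder the build ships with
cp -r release/datafiles/colormanagement release/datafiles/colormanagement.bak
# Replace it with the folder from the other Blender version
cp -r /path/to/other/blender/release/datafiles/colormanagement release/datafiles/
# Or, without touching the build, point OCIO at a specific config (Linux)
export OCIO=/path/to/the/configuration/config.ocio
```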

Ok replacing the folder with the one from the previous version (which doesn’t have filmic) yields the correct result.

Meanwhile, copying it over from regular Blender yields yet another take. Definitely pinkish but not as much?

(That said, this is clearly broken in other ways: literally the only option that shows up when doing this is the Standard view transform with the None look.)

Both of these are with just a white light source without any nodes. Literally the default scene except I added a plane. No Filmic either

This example is exactly why the configuration I put up cannot be separated from the spectral reconstruction and mixed and matched. Everything is broken without it.

I have been thinking about this a lot, I still don’t quite understand. Sorry, I know this was a post from way back, but I think I just need to ask.

My first question is, why should a pure green light have a little red in it? If it has a little bit of red then it is not pure green anymore, right?

And there comes my second question: in regular Cycles, in order to reproduce this “Black & Green” effect, you need to be very precise about the color of the light; you need to make sure it is 100% pure green. If the user just drags the color dot on the color wheel to a random green, like what I usually do, it would have a little red and blue mixed in, and the render would display normally. So does it really make sense to make pure green have a little red in it? If you don’t want that “Black & Green” effect you could always remember to avoid using 100% pure colors, right?

And then comes my third question: is it true that green lights in the real world would not make red objects look black? Green light makes a red object look black; that has been common sense to me, because that was what I learned back in middle school physics class. In fact, if you Google this, most, if not all, results will tell you it would be black; I cannot find a single source saying it would be brown.

If you say that was just for extreme situations, that normally you cannot get a pure green light, so most green lights in the real world are not pure, then it comes back to my second question: you could always avoid using pure green lights in software, right? Just like you can bevel every sharp edge of a mesh to avoid the 100% sharp edges that don’t exist in the real world, right?

What is “pure green”?
If you’re talking spectrally, that’s gonna be some specific wavelength (though there is a range of wavelengths which may justly be called “green” in that we decided to call the percept of those wavelengths by that term.)

If you’re talking what sRGB somewhat arbitrarily calls (0 1 0) green, that just simply isn’t a notion of green that corresponds to any one wavelength. It’s necessarily a mixture of (spectral) colors. Quite far, in fact, from maximally saturated.

Like, this (the triangle) is sRGB:

All the colors a well-calibrated sRGB screen can display are inside that triangle. Outside it, we have colors that we can perceive but a computer screen can’t show us.
As you can see, none of the three points that define the primaries (red, green, blue) are touching the outside edge. They are maybe kinda sorta close to 450, 550, 650 nm as a very rough rule of thumb, but if you actually looked at those three wavelengths, they would be far more saturated than what a screen can do.
Green in particular is pretty far away from the limits of what we could see.
To generate it, you’ll have to have some amount of light towards the red and blue portions of the spectrum.
There is no unique way to do this. In fact, there are infinitely many ways. But none of them will allow you to exclusively use green wavelengths (in that you would unambiguously call them “green” and not at the very least perhaps yellow or cyan if you saw them on their own rather than mixed together)

And even if the three colors were spectral (there are color spaces that do this), you could still only generate any color within the triangle the three colors define. Because the full space has this weird horseshoe shape, no single triangle of possible colors would accomplish giving you the entire space. So long as you don’t directly generate all spectral colors, some colors are going to have to give.

So the bottom line is, spectral green may be “pure” and then, yeah, there’s not gonna be red light in it. sRGB green, however, just simply isn’t “pure” in that strict sense, by construction.
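To make that concrete, here's a small Python sketch. It uses the published analytic fits to the CIE 1931 color-matching functions (Wyman et al.'s piecewise-Gaussian approximation) and the standard XYZ-to-linear-sRGB matrix, and shows that a single 550 nm "pure green" wavelength lands outside the sRGB triangle, i.e. would need negative red and blue:

```python
import math

def gauss(x, mu, s1, s2):
    # Piecewise Gaussian used by the analytic CMF fits:
    # different widths on each side of the peak
    s = s1 if x < mu else s2
    return math.exp(-0.5 * ((x - mu) / s) ** 2)

def cie_xyz(wl):
    # Approximate CIE 1931 2-degree color matching functions
    x = (1.056 * gauss(wl, 599.8, 37.9, 31.0)
         + 0.362 * gauss(wl, 442.0, 16.0, 26.7)
         - 0.065 * gauss(wl, 501.1, 20.4, 26.2))
    y = (0.821 * gauss(wl, 568.8, 46.9, 40.5)
         + 0.286 * gauss(wl, 530.9, 16.3, 31.1))
    z = (1.217 * gauss(wl, 437.0, 11.8, 36.0)
         + 0.681 * gauss(wl, 459.0, 26.0, 13.8))
    return x, y, z

def xyz_to_linear_srgb(x, y, z):
    # Standard XYZ -> linear sRGB matrix (D65 white point)
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

r, g, b = xyz_to_linear_srgb(*cie_xyz(550.0))
print(r, g, b)  # red and blue come out negative: 550 nm is outside sRGB
```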

I mean, yes. You can, indeed, lower the saturation to mitigate these effects. The closer to grey you get, the less of a problem it’ll be. (In fact you don’t need to go very far at all to get at least pretty close. I think a saturation of 0.8 is not an exact match yet, but if you don’t directly compare, you usually won’t notice anything amiss.)
This is what it means for spectral rendering to matter most with saturated colors.
The more noticeable deep bounces are, the more this matters though. (Simply because colors get more saturated as you bounce deeper) - for instance, absorption is quite affected by this.
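A toy calculation of that saturation effect (the albedo here is invented for illustration): after n bounces off the same surface, the throughput is the albedo to the n-th power, which drifts away from grey toward the dominant channel.

```python
# Invented reddish surface albedo (linear RGB)
albedo = (0.8, 0.5, 0.4)

def saturation(rgb):
    # HSV-style saturation: how far the color is from grey
    mx, mn = max(rgb), min(rgb)
    return (mx - mn) / mx if mx > 0 else 0.0

for bounces in (1, 2, 4, 8):
    throughput = tuple(c ** bounces for c in albedo)
    print(bounces, saturation(throughput))
# Saturation climbs toward 1 as the bounce count grows
```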

Once again it depends on what you mean by green light as well as red objects. sRGB green? Your average red object (say, a tomato) is gonna look brownish.
Oh I can even show you a good example:

In vertical farms, they usually use magenta lighting in order to save power. No use in blasting leaves with green light if it all just gets reflected away anyways. Go for the stuff that actually gets absorbed to contribute to photosynthesis! - Which is red and blue light (Perhaps some UV and some IR)
On the right side of the image you can see that the leaves here appear mostly brown, except where some white light hits them to reveal that they are actually a pretty darn healthy, saturated green.

Alternative test: Try shining a red laser (so a pure, spectral color) on some green or blue object. The more saturated and the farther away from red, the better. Are you able to see the laser light on the surface of the object? Yes? Congrats, you have shown that the object’s color actually reflects that sort of light. It may well be far dimmer, but for most materials it’s still gonna be visible.

It’s only once you get to rather extreme engineered cases such as Vantablack where you can actually have intense laser light just be swallowed by the material entirely.
– That said, some pretty common real world materials have colors that are out of gamut for sRGB!
In fact, the classic MacBeth chart contains one of them. The turquoise/cyan on it is out of gamut. Too saturated to fit in sRGB (if you happen to light it with white light)

Click here to see the MacBeth chart under a variety of sRGB-based lighting conditions







If you check it out, not a single one of those swatches is black. Not even the cyan under red light (top right)

It’s also quite noticeable under some of these lighting conditions, how colors that look basically identical under white light can look very different under other light conditions. That’s called metamerism and the reason for that is the aforementioned infinite number of ways to generate any one color (given a lighting condition)
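Here's a toy numeric sketch of metamerism. The spectra and the coarse three-channel "sensor" are all invented; the only point is that two different spectra can produce identical sensor responses yet separate under narrowband light:

```python
# Two different 6-bin "spectra" (invented numbers)
s1 = [0.8, 0.2, 0.1, 0.9, 0.5, 0.5]
s2 = [0.2, 0.8, 0.9, 0.1, 0.5, 0.5]

def sense(spectrum):
    # Toy 3-channel sensor: each channel sums two adjacent bins
    return [spectrum[0] + spectrum[1],
            spectrum[2] + spectrum[3],
            spectrum[4] + spectrum[5]]

print(sense(s1) == sense(s2))  # True: metamers under this sensor
# A narrowband light hitting only bin 0 reveals that they differ:
print(s1[0] == s2[0])  # False
```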

If you had a “pure green” object in that it only reflects green light of a very narrow spectrum, that object would indeed not reflect any red light, but it’d also be so saturated that sRGB couldn’t capture it.

And as said, high saturation colors that fall outside sRGB do exist. Which is one situation where spectral rendering becomes doubly important:
The only way to get sRGB to emulate colors of that saturation level is to use negative brightness.
Like for that cyan in the top right of the macbeth chart, you’d actually need, like, -0.03 for the red channel to make it work within sRGB. But that’s completely unphysical and Cycles gives extremely strange results if you try to do this. It’s not equipped to properly handle negative light.

With spectral rendering, you can avoid this. Just give it the actual spectrum. Every calculation will involve positive light, energy will be appropriately preserved, and, sure, the end result will be oversaturated if you blast it with white light, but the actual rendering process will not involve negative light causing things to break and go weird.
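A tiny sketch of why negative channels break multiplicative light transport, using the -0.03 red channel mentioned above: a negative value flips sign with every bounce, while a nonnegative spectral representation can only ever lose energy.

```python
# Unphysical negative red channel needed to fake the out-of-gamut cyan
reflectance_r = -0.03

one_bounce = reflectance_r        # -0.03: "negative light" after one bounce
two_bounces = reflectance_r ** 2  # +0.0009: a second bounce ADDS red back
print(one_bounce, two_bounces)

# A spectral representation multiplies nonnegative per-wavelength values,
# so throughput can only stay the same or decrease at each bounce:
spectrum = [0.0, 0.4, 0.7]        # invented nonnegative reflectance bins
after_two = [v * v for v in spectrum]
print(after_two)
```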

Edit: I actually did extensive comparisons to check the difference in RGB rendered and Spectral behavior:

Obviously there were still some mistakes here (hopefully a better spectral upsampling algorithm is going to fix this), but it gives a wide range of cases and really shows at roughly which levels spectral rendering still matters vs. RGB rendering. - In all of these renders, saturation varies linearly from left to right and from top to bottom.
I.e. the topmost Suzannes have saturation 1, then .75, .5 .25 and 0
The leftmost wall has saturation 1, then .8, .6, .4, .2, and 0,
and the light source on top goes linearly from fully saturated to not at all saturated. The only reason you can even see the walls and Suzannes in the leftmost chamber at all is because that light source is already down to 0.8 saturation at the top. But the differences are pretty big, especially for the top left Suzannes, and if the Suzannes are green, those differences can remain for quite a long time.


Since all the spectral nodes stuff is perhaps not gonna stand once the spectral branch gets released, from my limited perspective, there are only two things that stand in the way of code review:

  • we need a working Filmic (imo it’d be fine if it was just regular Filmic, with perhaps an update pending. Obviously the current red tint isn’t acceptable)
  • we need good spectral upsampling (ideally, it should be fast and minimize the differences to regular Cycles. At the very least, grey colors should be exact matches, and “pure” (in the sense of sRGB) colors should be very close)

Anything else, as far as I’m concerned, can come later, right?

How close are we to these goals? I really hope a first round of code review can happen sooner rather than later :slight_smile:


Have you seen how to calibrate color saturation on a TV display or monitor with the blue mode?
Here's an example:
https://www.adventmediainc.com/blog/color-calibration-technique
You increase the saturation on the display until the cyan and magenta in the blue-mode test pattern have the same saturation intensity. If, for instance, the saturation is too high, then the cyan is brighter in this blue-mode test pattern.

I think we need a similar test for spectral (what you did with the monkeys, kind of) but more precise. Maybe with the values that are used for HDTV? Those values on materials and charts, with blue light?

I made a test: background light blue only (0,0,1), strength 1, standard exposure 0.


If I increase the background intensity or the exposure, then the blue color shifts to violet.
Here's the same setup with exposure at 4; something seems broken.

For comparison, RGB with the same setup, exposure 4:

Due to the army of issues here, this won’t work. I am sure there are spectral tests that could be done, but it would be wiser to test against known values or known imagery.

Maybe you are right, but what's wrong with this test?
The light works as a filter to emit only blue onto the test pattern.
The test pattern (should) reflect only the wavelengths which fall into the wavelength range of the emitted light.

Do you have better imagery or a test pattern with known values?

These test pattern values are known (all top-row colors have a value of 0.75).

This test pattern is better (RGB)


and spectral

There are a few issues. For one, you don’t have the exact emission spectrum of these patterns. For another those spectra aren’t even standardized and different screens are gonna produce different spectra to accomplish the same color impressions.
Third, the spectra Cycles currently uses are optimized towards reflectance spectra (and currently a bit off) - which try to be very flat and broad. So to speak, the most boring spectra they could possibly be while still producing their specific colors. Meanwhile, screen spectra (and non-incandescent emission spectra in general) tend to be much more peaky.

But all of these are besides the main issues, I think.
And that is that, if I understand what you are doing right, you're effectively not testing the direct spectra; rather you multiply, in this case, the current spectral Cycles blue spectrum with the individual spectra as they appear on these test images.

It works fine if you manipulate RGB values directly because in RGB the three colors are entirely independent.
But spectrally, there is overlap. And that overlap is desired.
And that means that, if you multiply the red and the blue spectrum, you don't get black (as you would if you multiplied RGB Red and RGB Blue), but rather their overlap, which is a kind of dark magenta.
And if you multiply the blue spectrum with itself, you don’t get back the same blue spectrum (as you would if you multiplied RGB Blue with itself) but rather a darker, more saturated, out of gamut blue. You leave sRGB space.

These are not bugs. These are desired and realistic-to-life advantages of spectral rendering!
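Here's a toy numeric sketch of that overlap, with invented broad Gaussians standing in for the upsampled sRGB primaries (these are not Cycles' actual spectra): red times blue survives in the overlap region instead of going to black, and blue times blue gets narrower, i.e. more saturated, rather than returning the same blue.

```python
import math

# Invented, deliberately overlapping "upsampled" primaries
wavelengths = list(range(400, 701, 10))

def spectrum(peak, width):
    return [math.exp(-0.5 * ((wl - peak) / width) ** 2) for wl in wavelengths]

red, green, blue = spectrum(650, 60), spectrum(550, 60), spectrum(450, 60)

# RGB-style multiply: (1,0,0) * (0,0,1) = (0,0,0), pure black.
# Spectral multiply: the overlap region survives as a dim magenta.
red_times_blue = [r * b for r, b in zip(red, blue)]
print(sum(red_times_blue) > 0)  # True: not black

def mean_width(s):
    # Standard deviation of the spectrum, as a crude width measure
    total = sum(s)
    mean = sum(w * v for w, v in zip(wavelengths, s)) / total
    var = sum(v * (w - mean) ** 2 for w, v in zip(wavelengths, s)) / total
    return var ** 0.5

# Blue times itself narrows the spectrum: more saturated, out of gamut
blue_sq = [b * b for b in blue]
print(mean_width(blue_sq) < mean_width(blue))  # True: narrower spectrum
```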

Because of this – because RGB are spectrally not fully independent, the “correct” way to render this image, within the spectral branch, is to separate the color channels and look at them in isolation. Not to light the image with a given color light source. But then you will probably find that there is no difference (it would be truly weird if there was), and your entire test isn’t gonna give us all that much more information.

This is a texture, of course it has no spectral values. How do you want to import a textured model into the spectral branch and expect a precise display of it?
Fwiw these have been standardized for ages.

I know what you mean, but this should only be right if you have a broader spectrum (e.g. full-spectrum white light, an orange spectrum near yellow-red, etc.). But if you have only a narrow blue-spectrum light, why should it emit a broader spectrum which is (should be) not there?

I would agree, if the precision was there, as said.

Why do we do test renderings then? Why do you do your monkey renderings, if this is the correct behavior?

Have you compared such tests with a known spectral render engine to see if there is the same overlap?

For a complicated texture this would indeed be tricky. For something simple like this? Rebuild it color by color, just like I did for the MacBeth chart. But that's sorta beside the point anyway.

Because, as I elaborated here, sRGB red, blue, and green just simply aren’t narrow spectra. In fact they are quite far away from being purely spectral (maximally narrow) and to accomplish them, you pretty much need to mix wavelengths in there that, if you look at them purely, will not look like the desired end color to you. There will be less of those, sure. But not none.
If you go back ages, pretty early on in this massive thread, you can see what those spectra look like, from when they were first introduced.

If you do shine a narrow blue spectral light on your screen, sure, you’ll (almost) only get blue light reflecting back. But actually, that won’t be your desired color either. Because then it will simply be too saturated! Going well outside sRGB.

The issue here is what exactly you are testing. Effectively, you are trying to test additive color mixing (i.e. the way emitters work) but you are using a process that is more like subtractive color mixing (the way paint works) or, I suppose, you could call it “multiplicative” color mixing? In the sense that that’s how reflections work.

This is crucially dependent on how RGB colors are being upsampled. Which there are many ways to accomplish. And if you don’t know how other engines do this, you can’t fairly compare them.
But generally speaking, yeah, the better ones will have such overlaps.
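For a flavor of what "one way to upsample" can look like, here's a minimal Smits-style decomposition sketch with invented box-function bands (real implementations use carefully chosen smooth basis spectra, so this is only structural). One property it shares with the good schemes: grey maps to a flat spectrum and round-trips exactly.

```python
# Minimal Smits-style RGB -> spectrum upsampling sketch, three coarse
# bands with box-function bases (all invented for illustration).
def upsample(r, g, b):
    # spectrum as [blue-band, green-band, red-band] amplitudes
    white = min(r, g, b)
    spec = [white, white, white]   # grey part becomes a flat spectrum
    r2, g2, b2 = r - white, g - white, b - white
    # secondary colors cover two bands each
    cyan, magenta, yellow = min(g2, b2), min(r2, b2), min(r2, g2)
    spec[0] += cyan + magenta      # blue band
    spec[1] += cyan + yellow       # green band
    spec[2] += magenta + yellow    # red band
    # any remaining single-primary residue
    spec[2] += r2 - magenta - yellow
    spec[1] += g2 - cyan - yellow
    spec[0] += b2 - cyan - magenta
    return spec

def downsample(spec):
    # Box "sensors": each channel reads exactly one band
    return (spec[2], spec[1], spec[0])

# Grey round-trips exactly: same value in every band
print(upsample(0.5, 0.5, 0.5))              # [0.5, 0.5, 0.5]
print(downsample(upsample(0.2, 0.7, 0.4)))  # ~ (0.2, 0.7, 0.4)
```

With box bands the round trip is trivially exact for everything; with real smooth bases only grey stays exact, which is why the smoothness of the chosen basis spectra is where the actual design work lives.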