Thoughts on making Cycles into a spectral renderer

Did you reconstruct from the CMFs? How do you spectrally adapt to a display when all you have is xy colourimetry?

It’s approximations all the way down.

I’m well aware of the physical models. I understand your point.

I assure you, however, that when you adapt to a scene, it is not likely because some sort of spectral math on SDs is happening inside our complex system; it’s psychophysical, and as an approximation with existing approaches, it is good enough here.

The CATs work well enough to get display output. Again, acceptable approximations, and at some point we absolutely must account for adaptation from the psychophysical side.

Perhaps it’s wise to wait for input from others before responding…

The way I understand it right now (and I might not), I’m not sure what you are saying is going to work @Scott_Burns.
It is technically infeasible for a renderer like this to keep around pure spectral data. I’m not sure that would be needed for what you want to do here, but we don’t actually end up with a full spectral distribution per pixel that could then be transformed as a spectrum. Instead, each sample is, in my current understanding, converted to scene linear RGB, and those three channels are added up per sample to arrive at the final raw RGB result, which is then tonemapped however the color management portion of Blender sees fit.
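To make that concrete, here is a minimal sketch (not Cycles’ actual code) of the accumulation scheme as I understand it; the bin count, CMF tables, and target-space matrix are illustrative assumptions:

```cpp
// Minimal sketch of per-sample spectral-to-RGB accumulation.
constexpr int N_BINS = 81;  // e.g. 380..780 nm in 5 nm steps (assumption)

struct RGB { float r, g, b; };

// Assume these are filled with CIE 1931 CMF samples at load time.
static float cmf_x[N_BINS], cmf_y[N_BINS], cmf_z[N_BINS];

// Standard XYZ -> linear sRGB matrix, as an example target space.
static RGB xyz_to_rgb(float X, float Y, float Z)
{
    return { 3.2406f * X - 1.5372f * Y - 0.4986f * Z,
            -0.9689f * X + 1.8758f * Y + 0.0415f * Z,
             0.0557f * X - 0.2040f * Y + 1.0570f * Z};
}

// Each sample's spectrum is integrated and discarded immediately;
// only the running RGB sum survives per pixel.
static void accumulate_sample(const float spectrum[N_BINS], RGB &pixel_sum)
{
    float X = 0.0f, Y = 0.0f, Z = 0.0f;
    for (int i = 0; i < N_BINS; i++) {
        X += spectrum[i] * cmf_x[i];
        Y += spectrum[i] * cmf_y[i];
        Z += spectrum[i] * cmf_z[i];
    }
    const RGB rgb = xyz_to_rgb(X, Y, Z);
    pixel_sum.r += rgb.r;
    pixel_sum.g += rgb.g;
    pixel_sum.b += rgb.b;
}
```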

The only other ways around that are a crap ton of memory consumption (impractical) or quantization (i.e. binning, which would likely cause unwanted artifacts), and even if you could keep everything around, the results would probably be very noisy, which might degrade the results you’re hoping for in a different way.
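To put rough (illustrative, not branch-specific) numbers on the memory point: storing a full spectrum per pixel at 4K (3840 × 2160) with 380–780 nm sampled at 1 nm in single-precision floats would take about 3840 × 2160 × 401 × 4 B ≈ 13.3 GB for a single frame buffer, versus roughly 100 MB for plain RGB.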

If I understand right, you’d need this raw spectral data after the image is fully rendered to do that “ground truth” transform, right? If that’s correct, that unfortunately won’t happen.

The only exception is if you can do the transform you desire per ray (while the spectral data for each ray is not yet forgotten), and do it fast. We’re talking about myriads of samples here, so even tiny slowdowns per sample add up really quickly. If what you have in mind doesn’t work within those restrictions, it unfortunately won’t do in this setting.

Please correct me on any details I got wrong.

For the most part you’ve got it right. An extra multiply per sample isn’t going to make a huge difference to the render time, though; it’s more of a practicality thing at this stage. It’s certainly possible to say “I want all the spectral data to be multiplied with this spectrum” and it can happen, but it’s something that should be configurable, not hard-coded into the engine, and that’s completely out of scope for where I’m at now.

To be as accurate as possible when simulating colours under daylight, we should indeed multiply the spectrum by D65, but this could just as easily happen in the scene itself, where the sky emits such a spectrum, so the metameric effects (if that’s the term) are already present.

If that’s all we’re talking about here, a single multiplication, pre-integration (i.e. one that can be done for each individual sample), then I don’t immediately see why we wouldn’t just go for the more accurate version.
But yeah, definitely in a configurable manner. Definitely don’t want to hardcode a fixed whitepoint in anywhere.
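For the record, a sketch of what that single pre-integration multiply could look like, assuming the illuminant SPD has been resampled to the renderer’s bins at load time (table name and bin layout are assumptions, and the illuminant should of course be user-configurable):

```cpp
constexpr int N_BINS = 81;

// Assumed resampled to the renderer's bin layout, e.g. CIE D65.
static float illuminant_spd[N_BINS];

// Weight one sample's spectral throughput by the illuminant before
// it gets integrated against the CMFs: one multiply per bin per sample.
static void apply_illuminant(float spectrum[N_BINS])
{
    for (int i = 0; i < N_BINS; i++)
        spectrum[i] *= illuminant_spd[i];
}
```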

That’s the issue. “More accurate” is entirely relative. More accurate is having the illuminants embedded in the lights, but that doesn’t solve the chromatic adaptation for different target spaces if people expect R=G=B to be ‘white’.
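For readers following along: the usual tool for that last step is a von Kries-style chromatic adaptation transform. Here is a minimal sketch using the Bradford matrix; this is illustrative, not necessarily what this branch’s OCIO config does:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<std::array<float, 3>, 3>;

// Standard Bradford matrix and its inverse.
static const Mat3 BRADFORD = {{{ 0.8951f,  0.2664f, -0.1614f},
                               {-0.7502f,  1.7135f,  0.0367f},
                               { 0.0389f, -0.0685f,  1.0296f}}};

static const Mat3 BRADFORD_INV = {{{ 0.9869929f, -0.1470543f,  0.1599627f},
                                   { 0.4323053f,  0.5183603f,  0.0492912f},
                                   {-0.0085287f,  0.0400428f,  0.9684867f}}};

static Vec3 mul(const Mat3 &m, const Vec3 &v)
{
    Vec3 r{};
    for (int i = 0; i < 3; i++)
        r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    return r;
}

// Adapt an XYZ colour from a source white point to a destination one,
// so that the source white maps exactly onto the destination white.
Vec3 bradford_cat(const Vec3 &xyz, const Vec3 &src_white, const Vec3 &dst_white)
{
    Vec3 lms = mul(BRADFORD, xyz);
    const Vec3 src = mul(BRADFORD, src_white);
    const Vec3 dst = mul(BRADFORD, dst_white);
    for (int i = 0; i < 3; i++)
        lms[i] *= dst[i] / src[i];  // von Kries scaling per channel
    return mul(BRADFORD_INV, lms);
}
```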

@smilebags Can you merge master ? The recent lib changes make it difficult to build your branch currently

Sorry, I’m off for the night. Will do so tomorrow and get back to you. @LazyDodo

oh yeah no rush, anywhere in the next week or so works for me

I threw together a simple test scene real quick, just testing out some extreme cases of this change.

Here you can see three balls (each in one extreme RGB color) on a white (1,1,1) plane. The light source is also one of three extreme colors.

Nothing else was changed between these renders.

Red:

RGB:

Imgur

Spectral:

Imgur

Green:

RGB:

Imgur

Spectral:

Imgur

Blue:

RGB:

Imgur

Spectral:

Imgur

It seems a little weird to me that blue ends up becoming this purple under blue light. But otherwise I like these results.


I suppose it’s worth pointing out that I believe we are talking about two different things, where “one” thing is being discussed.

Within the scene colourimetry, we could say that any number of arbitrary colours are adaptable. That’s not the real issue.

The issue is being able to achieve colour constancy from the psychophysical side based on the output display colourimetry. It literally has nothing to do with the scene at this point. The psychophysical side is somewhat a separate mechanism, and related to output contexts in this case.

So I believe it’s mixing apples and oranges to a degree, and even if the idea is to adapt spectrally, good luck getting the spectral distribution of a random display’s colourimetry.

These are excellent tests.

The code is still subject to bugs. With that said, in the last blue case you can imagine that the blue spectrum is being hit with a blue light of the same SD. That is, I believe this simulated indirect light will tend to sharpen the spectral distribution. In this case, the result could actually end up outside of the source gamut using the reconstructed primaries. At the very least, the output result will be a different spectral composition, and depending on the input, that has a good chance to change quite dramatically. As we increase saturation, we also get plenty of psychophysical oddities such as the Abney effect, which is very noticeable in blues, so there’s another complicating axis to this mess.
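A toy illustration of that sharpening effect, with a made-up Gaussian “blue” reflectance standing in for real data: each bounce multiplies the throughput by the same spectrum, so after n bounces you carry reflectance^n, whose width shrinks roughly as 1/sqrt(n):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const int n_bins = 401;                    // 380..780 nm, 1 nm steps
    const float peak = 450.0f, sigma = 40.0f;  // toy "blue" reflectance
    for (int bounces = 1; bounces <= 4; bounces++) {
        int fwhm_nm = 0;
        for (int i = 0; i < n_bins; i++) {
            const float lambda = 380.0f + i;
            const float r =
                std::exp(-0.5f * std::pow((lambda - peak) / sigma, 2.0f));
            // Throughput after n bounces off the same surface colour:
            const float t = std::pow(r, (float)bounces);
            if (t > 0.5f)  // count bins above half max = FWHM in nm
                fwhm_nm++;
        }
        printf("bounces=%d  FWHM ~ %d nm\n", bounces, fwhm_nm);
    }
    return 0;
}
```

The printed width drops from roughly 94 nm at one bounce to about 47 nm at four: the indirect light becomes narrower, i.e. more saturated.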

And because it’s blue, and the whole pipeline is based on 1931 at this point, without the later Judd–Vos corrections etc. that end up in the 2006 CIE CMFs, blue wavelengths are subject to some rather unfortunate twists and turns.

I’m not certain that that is what is happening here exactly, but the moment we are in spectral, all sorts of things get trickier. This ends up wrapping in gamut mapping and other things. Right now though, there are quite a few larger fish to fry to make sure things are working within the CMS.


@LazyDodo I’ve now merged latest master and pushed my branch, you should be able to build now. (though if it’s related to casting between floats and doubles, that’s another issue I plan to solve shortly)

There’s a new build up with better OCIO config (but unfortunately no filmic-like transform available yet) thanks to Troy. I think now it is in a pretty usable state, just don’t denoise or use volumes (or SSS)! I’d be really interested to see some comparisons on real scenes.

https://1drv.ms/u/s!Al91CjdrcExJwrQ6E7VfnFKn-CjjCw?e=4pwgVB

(Edit: added gifs for ease of comparison)

Another test. Please click on the images to see them at full size. This is still build 02, so no improved OCIO config yet.

I tried some glass with absorption. Since volumes don’t work yet, I went with a node setup to kinda fake it.
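For context, the physical behaviour such a setup approximates is Beer–Lambert absorption, where transmittance falls off exponentially with the distance d travelled inside the medium: T(λ) = exp(−σ_a(λ) · d), with σ_a the wavelength-dependent absorption coefficient. (That’s my reading of the intent, not taken from the node setup image itself.)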

Node Setup for the material:

RGB:

Imgur

Spectral:

Imgur

Note specifically the internal bounces. In the RGB version, the color shifts much more towards red. It also looks slightly brighter. The Spectral version has a more consistent color as the absorption becomes stronger. You’ll probably have to open them at full size to properly make out the difference.

EDIT: I decided to render some closeups to make it more obvious

RGB:

Imgur

Spectral:

Imgur

The spectral version ends up having more contrast, and deeper bounces in the RGB version are significantly more red. Indeed, the entire RGB version almost looks like it has a mild red haze over it if you flip back and forth between the two.
With a less saturated color like this (still pretty saturated, but no longer maximally; the color in the center of the gradient is RGB 0.9, 0.1, 0.5, and the light source is just a point light at RGB 1, 1, 1 and 1000 W), the differences aren’t nearly so noticeable, though.
Even so, I definitely prefer the spectral version.


I never knew what the difference between Spectral and RGB was, so I took a scene of mine, turned some lights red and some objects green and blue, and discovered that until today some of my scenes were off… :astonished:

RGB


Spectral

RGB


Spectral


Note that these are extreme cases. With less saturated colors, farther from pure red/green/blue, the differences won’t be quite so drastic. But yeah, I like this change a lot. Colors feel like they behave closer to how they would in the real world, even with the simple spectral upsampling approach that’s happening right now. The look is still gonna change as stuff is tweaked, improved, and fixed. But even now, it’s great to my eyes.


New build, now on GraphicAll - includes passes and denoising support, still no volumes or SSS though

@kram1032 I hope you don’t mind me using your image.


Well done!

Full props to folks like @kram1032 and @SerjMaiorov; I can say from first-hand experience how profoundly important the people testing and posting imagery are.

Kick the tires. Post compelling work.


One more example of absorption, this time much more noticeable.

RGB:

Imgur

Spectral:

Imgur

The three-light (RGB) rendering forces the image to remain in gamut. The spectral version is clearly out of gamut in some spots, highlighting how necessary a wider-gamut workflow would be. I actually don’t mind the out-of-gamut bits here, though. The patterns in this gem are just lovely, and the high saturation really makes it pop.
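As an aside, spotting those out-of-gamut regions programmatically is straightforward; here’s a minimal sketch, assuming a linear sRGB/Rec.709 target (the matrix is the standard XYZ-to-linear-sRGB one):

```cpp
struct RGB { float r, g, b; };

// Standard XYZ -> linear sRGB conversion.
RGB xyz_to_linear_srgb(float X, float Y, float Z)
{
    return { 3.2406f * X - 1.5372f * Y - 0.4986f * Z,
            -0.9689f * X + 1.8758f * Y + 0.0415f * Z,
             0.0557f * X - 0.2040f * Y + 1.0570f * Z};
}

// Any negative channel means the chromaticity lies outside the target
// gamut, and a naive clip to zero will skew the colour.
bool out_of_gamut(const RGB &c)
{
    return c.r < 0.0f || c.g < 0.0f || c.b < 0.0f;
}
```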

Just because I really liked it so much, I went ahead and did two more renderings of the same scene with a different light position, only in the Spectral branch:

Imgur

Imgur

This is still build 02 btw, I should really update to the latest version.


If the new configuration holds up, I will slowly integrate some of the gamut mapping things I’ve been working on since FB.

I’d like to add that it is important to separate the intention of the chromaticity from what we end up seeing via the blind transform clip; the clip skews colours into heavily saturated, wrong chromaticities, typically distorting your imagery wildly.

Great samples. I am curious: can dispersion in caustics happen with the current path tracing math?
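In principle, all dispersion needs is an index of refraction that varies with wavelength, evaluated per spectral sample at each refraction event. A sketch using Cauchy’s empirical equation, with rough BK7-like coefficients as illustrative assumptions:

```cpp
// Wavelength-dependent IOR via Cauchy's equation: n(l) = A + B / l^2.
// Coefficients roughly approximate BK7 glass; they are illustrative.
float cauchy_ior(float lambda_nm)
{
    const float A = 1.5046f;
    const float B = 4200.0f;  // nm^2
    return A + B / (lambda_nm * lambda_nm);
}
// e.g. cauchy_ior(450) ~ 1.525 vs cauchy_ior(650) ~ 1.515, so blue
// refracts more strongly than red and a caustic fans out into a spectrum.
```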

Really awesome work guys, I’m interested to see where we end up with this!
Just trying out the new build and I’m getting this nasty green render. I tried both the GraphicAll build and building it myself. Any clues?
[image]