Thoughts on making Cycles into a spectral renderer

I think we’re mixing up different things:

  • The scene linear RGB color space has a particular gamut and white point, as defined by the OpenColorIO config. This is a technical choice about how many colors can be represented accurately, and about compatibility / ease of use, rather than a creative choice about what the final result looks like.
  • When converting from an RGB material or light color to a spectrum, this is a conversion from incomplete information and so there are various ways to construct the spectrum. Some per-light user control may be useful here, and default behavior should distinguish between reflection and emission.
  • When converting from the scene linear RGB render buffer to display space, there are various creative choices to be made. This is what the settings in the Color Management panel are for. Giving a blue tint as part of this is reasonable. The display transform depends on the scene linear color space, but not on whether spectral rendering is used.

The scene linear color space in the spectral branch config seems to have Illuminant E rather than D65 as in master. To me that seems to be fixing things in the wrong place if you care about compatibility with existing .blend files or interchange of OpenEXR files with other apps.

If you want lights to have a particular emissive spectrum, modify the conversion of light color to spectrum. For the render buffer, using E or D65 white point should not make any difference in the displayed result if you’re doing a straight conversion to display space (without e.g. compositing nodes that may be sensitive to the white point) and doing all the conversions correctly.

It must be Illuminant E to avoid biasing the mixture. It is also baked into the reconstruction RGB ratios.

Not quite wholly correct.

The spectral domain also has an assumed adaptation point, and that point follows through to the reconstruction of RGB.

This is wrong. It must be accounted for as it is a creative choice.

I don’t know what “biasing the mixture” and “reconstruction RGB ratios” mean, can you please use standard color management or Blender terminology to describe things? Googling either of those gives me nothing.

If we’re talking about some kind of bias when accumulating Monte Carlo samples in the render buffer, there should not be any. Conversion between different scene linear color spaces is a 3x3 matrix transform, a linear map that preserves vector addition and scalar multiplication.
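A tiny numpy sketch of why this holds (illustrative only, not Cycles code): because a 3x3 color space conversion is linear, converting each Monte Carlo sample and then accumulating gives exactly the same result as accumulating first and converting once, so no bias can be introduced either way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene-linear-to-scene-linear matrix (e.g. a white point swap);
# the exact values don't matter for the argument, only that the map is linear.
M = np.array([[1.05, -0.02, -0.03],
              [0.01,  0.99,  0.00],
              [0.00,  0.01,  0.98]])

samples = rng.random((1000, 3))  # stand-in for per-sample RGB radiance

convert_then_sum = (samples @ M.T).sum(axis=0)  # convert every sample, then accumulate
sum_then_convert = M @ samples.sum(axis=0)      # accumulate, then convert once

assert np.allclose(convert_then_sum, sum_then_convert)
```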

I’ve only heard of adaptation of spectral colors in relation to some XYZ/RGB space, not sure what it means for a spectral color by itself.

An explanation about why it’s wrong would be helpful.

I began writing this before Troy’s latest replies, and I also don’t know what is meant by “biasing the mixture” or “reconstruction RGB ratios” (but then I know quite little about color management).
It may well be that this post is going to be invalidated by whatever you are going to explain about those things. But with that in mind, here it is:


Maybe it would help to break down the entire pipeline and see where potential issues lie.

Please, anybody, correct me if I’m wrong or I’m oversimplifying something / skipping a step. This is from memory and as far as I understand it right now:

First up, we have two broad issues for colors in the spectral branch.

  1. Spectral reconstruction of RGB input colors (the current implementation will eventually be replaced by the parametric Wenzel Jakob approach to the issue)
    (this is designed for reflectance)

  2. How to interpret such reconstructed spectra in the context of light sources

Let’s put a pin in 1 as a known issue with a known eventual solution.
But 2 has a number of complications:

  • sRGB uses D65
  • (1 1 1) white should mean a light source displays as white
  • but for reflectance, (1 1 1) white should mean a perfectly reflective material
  • the way Cycles nodes for light sources work, colors are being applied twice
  • multiplying a D65 spectrum by itself isn’t normally going to yield the same spectrum, so double-applying shifts the color away
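A toy illustration of that last point (made-up spectra, no real CMF or D65 data): a flat Illuminant E spectrum keeps its normalized shape when multiplied by itself, while a tilted, D65-ish spectrum does not — which is exactly why applying it twice shifts the perceived color.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)

flat = np.ones_like(wavelengths)                   # Illuminant E: constant power
bluish = 1.0 + 0.3 * (700 - wavelengths) / 300.0   # crude stand-in for a D65-like tilt

def normalized(s):
    # Normalize total power so only the *shape* (i.e. the chromaticity) is compared.
    return s / s.sum()

# Flat spectrum: squaring leaves the normalized shape unchanged.
assert np.allclose(normalized(flat), normalized(flat * flat))

# Non-flat spectrum: squaring exaggerates the tilt, so the "white" drifts.
shift = np.abs(normalized(bluish * bluish) - normalized(bluish)).max()
assert shift > 0
```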

The current solution to that is:

  • render with Illuminant E (this deals with all the complications but the first)
  • Color-manage the result into D65 (this fixes that part, but the look isn’t identical, as the primaries end up in slightly the wrong place.)

Is this the best approach? Are there other solutions?

What if we do treat emission and reflection differently?
Relying on the current spectral reconstruction idea, using simple constraints on three primaries, can this approach be adapted?

Perhaps we could have, for now, two different spectral reconstruction models using almost the same technique.

  1. Reflectance and Transmittance. Same as now (until the other technique can replace it)
  • Three lights (linearly combinable)
  • least slopes constraint
  • sums to 1 (so “Illuminant E white” but this is not an illuminant)
  • each of the lights corresponds to scene linear saturated red, green, and blue respectively
  • must not be less than 0 anywhere
  • must not be more than 1 anywhere
  2. Emission - here we don’t have anything yet.
  • Three lights (linearly combinable)
  • least slopes constraint
  • sums to D65 (or whichever white point is correct)
  • each of the lights corresponds to scene linear saturated red, green, and blue respectively
  • must not be less than 0
  • total energy needs to be such that the chosen white point would come out to exactly (1 1 1) white for that color space (but individual wavelengths may go beyond 1)

@Scott_Burns do you think a setup like that would be reasonable? Even if I don’t expect that to be final (I really don’t know what best to do with light spectra, though), if there isn’t any very obvious flaw with this, it’d seem worth a shot to me.

Then just pick the appropriate sort of spectral reconstruction: Emission shaders (and wherever else emission occurs) get the emission variant whereas all other shaders get the reflectance/transmittance concept.

Additionally, for lamps only, the behavior actually would have to change depending on whether nodes are off (the color then needs to be treated like an emission spectrum) or on (in this case it actually has to be treated more like the reflectance spectrum, because otherwise we get a double-applied D65)
– this may mean that the results are not identical unless you use a neutral grey value for the outside-the-node-network light color. But for any grey it should match.

It may also make sense to consider an alternative for scattering spectra. I’m not sure what the right thing for those would be.

One drawback of this particular spectral reconstruction approach is that we’d have to do it once per color space.
Another is that it basically breaks for larger color spaces. So by no means is this a final solution.
However, at least the separation of Reflection and Emission (and possibly Scattering), independent of what actual spectral reconstruction approach we pick for each, ought to still work like that, right?

This is the crux of the issue, and why it does not work.

Imagine illuminating an RGB reconstructed surface with a proper D65 spectral power distribution source.

TL;DR: Provide a creative white point balance and the entire problem is no longer a problem.

This is the “bias” I was talking about with BT.709 primaries being used as energy-like light transport; all values are biased under multiplication to the Illuminant.

I don’t see the problem you seem to imply here.
If you light a grey (reconstructed but, by the constraints, spectrally constant) surface with a D65 spectral power distribution, it will look exactly as it should.
And for anything that’s colorful, the details depend on the spectrum anyway.

Can you explain in a bit greater detail please?

If you hit a D65 reconstructed R=G=B with D65 Illuminant, it’s a double up.

Right, which is why I said not to do that. The reflectance spectra would be reconstructed exactly as they are now, “using IE” - only emitters use the D65 reconstructions. The whole point of that post was to use two different color models for different shaders. (Or I guess technically for different closures?)

Code-wise it’d presumably just mean to duplicate the current spectral reconstruction function, call one _Refl and one _Emit, plug in the values accordingly, and then make sure the emission closure calls the _Emit version.
I think.
(I haven’t looked at the code and don’t know if there would be more complications here)

The difficulty (for me, more experienced programmers may not find it a challenge) is that currently the automatic conversion from RGB to spectrum (which allows you to plug a colour into a spectrum socket) happens in the same manner as all the other automatic conversions (RGB to float, float to RGB, etc.), so there’s no context available about where it’s being used. Maybe some people with a better understanding of Cycles may be able to figure out how to make this distinction, but it would be a challenge for myself.

At what point is the conversion made? I’d think each closure asks for it? Can you show me where the relevant code would be on GitHub?

This is something that @pembem22 implemented and I only looked over to improve my understanding of it but I’m not sure exactly where in the code it is, maybe @pembem22 might be able to tell you.

I don’t know if the first point is really a requirement, but it would be nice to have that option. For motion graphics type use cases that need this, I’d imagine you would disable spectral rendering though, just like you disable Filmic now.

The second one is most important.

If there are no spectral colors in the shader nodes, all you need to convert are the closure weights and closure evaluation results, where you know which type of closure it is.

What is a creative white point balance exactly? An addition to the display transform? An option to modify what the scene linear color space is? An option to modify the interpretation of material and light colors?

It’s not obvious to me how any of them solve the entire problem, what you would use as scene linear color space alongside it, and how that would work with interop of OpenEXR renders, textures, materials, etc.

Thing is, if you are just using a color for your light source, everything’s gonna be unexpectedly off-hue if you don’t do this. Illuminant E looks weirdly pink if your white point is D65, and that’s what the color white would become then. No doubt that’d lead to countless questions.
– that said, it’s easy enough right now (and probably could be improved further) to simply provide something more appropriate, such as an actual emission spectrum of a reasonable blackbody temperature. It’s just that, to do so, you need to use nodes and can’t just set it and forget it with a simple color as would currently be possible.
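For reference, such a blackbody emission spectrum is just Planck’s law evaluated over the visible range; a minimal sketch (peak-normalized, since only the relative SPD matters here):

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def blackbody_spd(wavelengths_nm, temperature=6500.0):
    # Planck's law: B(lambda, T) = 2hc^2 / lambda^5 * 1 / (exp(hc / (lambda k T)) - 1)
    lam = wavelengths_nm * 1e-9
    spd = (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * kB * temperature)) - 1)
    return spd / spd.max()  # normalize to peak 1 for use as a relative SPD

wl = np.linspace(400, 700, 31)
spd = blackbody_spd(wl, 6500.0)
assert np.all(spd > 0) and np.isclose(spd.max(), 1.0)
```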

It would be a technical / creative choice of how to adapt the open radiometric values.

As you pointed out, there is an implicit adapted white point in the working space. A UI slider / box / whatever that lets you choose how to interpret that white point is the missing piece of the puzzle.

Some folks might use this “creatively”, such as when setting a white balance to be warmer or cooler in a camera.

Some folks might have a deeper understanding and need, and wish to interpret the radiometric-like adapted white as another chromaticity for some specific technical detail.

Both cases are valid, and a UI control would “close the loop” here. Sane defaults of course, but as Blender matures, the ability to deal with these complexities on the horizon would be terrific.

A robust UI would probably be a basic CCT slider with some decent enough algorithm (none of the CCT algorithms are ideal), or a CIE xy chromaticity (the only robust method of selection), as the primary means of selecting the white point, and a choice of adaptation (Bradford vs CAT02, for example).
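For context, the Bradford adaptation mentioned here is a von Kries-style transform; a minimal sketch using the published Bradford matrix and the standard D65/D50 white points (the choice of D50 as destination is just for illustration):

```python
import numpy as np

# Published Bradford cone-response matrix.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

D65 = np.array([0.95047, 1.00000, 1.08883])  # XYZ of the D65 white point
D50 = np.array([0.96422, 1.00000, 0.82521])  # XYZ of the D50 white point

def bradford_adaptation(src_white, dst_white):
    # Map both whites into the Bradford cone-like space, scale per channel,
    # and map back:  M = B^-1 . diag(dst_cone / src_cone) . B
    src_cone = BRADFORD @ src_white
    dst_cone = BRADFORD @ dst_white
    return np.linalg.inv(BRADFORD) @ np.diag(dst_cone / src_cone) @ BRADFORD

M = bradford_adaptation(D65, D50)
# By construction, the source white maps exactly onto the destination white.
assert np.allclose(M @ D65, D50)
```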

Ok, but can you say which of these it is?

An addition to the display transform? An option to modify what the scene linear color space is? An option to modify the interpretation of material and light colors?

And what would be the scene linear color space to go along with it, still Rec.709 with Illuminant E as in the branch?

It’s not obvious to me how any of them solve the entire problem, what you would use as scene linear color space alongside it, and how that would work with interop of OpenEXR renders, textures, materials, etc.

And I guess you are aware of the existing White Level setting under Use Curves (which could be exposed outside that panel)?

When two sockets of different types are connected, a ConvertNode is placed between them. Later it is compiled as an RGB to spectrum node. The code is here.

I think this automatic spectral reconstruction type selection could be done by giving some kind of “reflection” or “emission” flag to the closure sockets and assigning them to the node subtrees accordingly. Then the appropriate spectral reconstruction function will be selected at the compilation stage based on those flags.
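A minimal sketch of that flag idea (all names here are hypothetical, not actual Cycles code): at compile time, the implicit RGB-to-spectrum conversion would pick the reconstruction variant from a flag carried by the destination socket.

```python
from enum import Enum, auto

class SpectrumUse(Enum):
    REFLECTANCE = auto()
    EMISSION = auto()

def compile_convert_node(dst_socket_use):
    # Stand-ins for the two reconstruction kernels discussed above:
    # the emission variant sums to the display white (e.g. D65), the
    # reflectance variant to a flat Illuminant E white.
    if dst_socket_use is SpectrumUse.EMISSION:
        return "rgb_to_spectrum_emit"
    return "rgb_to_spectrum_refl"

assert compile_convert_node(SpectrumUse.EMISSION) == "rgb_to_spectrum_emit"
assert compile_convert_node(SpectrumUse.REFLECTANCE) == "rgb_to_spectrum_refl"
```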


Can the ConvertNode be made aware of the node it’s supposed to be going to? Because if not – and honestly I think that’d be the better approach anyway – it’d make sense to give the different real-world spectral types different node types as well. Explicitly carry through that we’re dealing with a reflectance spectrum or an emission spectrum. The two types would just convert to each other with the identity function (anything else wouldn’t make a whole lot of sense), but their behavior is different if any other compatible thing is plugged in.
They wouldn’t even have to be distinguished visually. The only difference is whether the socket ends in an emission closure.

And perhaps for convenience/clarity it’d make sense to provide a standalone node that does this conversion either way, too. Like an RGBToSpectrum node that has a Reflectance/Emission toggle. – Just in case you actually want to further manipulate specifically an RGB-based emission spectrum. (Otherwise I think there wouldn’t be a way of doing this.)

Perhaps we should have different sockets and connection colors for each of the spectra types? Makes it less confusing for the end user too.

What does any of this massively complex overhead gain?

I would prefer to keep the spectrum as reusable as possible, since I don’t see any benefit of artificially restricting what can be done with a spectrum. If we need to manipulate them in some way when using them in a particular case (like lighting) then that can be done but I think it makes sense for it to be explicit and able to be disabled.

Separating them by type has some benefits, and could have some notable ones, such as being able to retain the luminance of the original RGB triplet for saturated colours which isn’t always possible for reflectance spectra in wide gamut colour spaces.

This being said, I don’t necessarily think distinguishing them is particularly beneficial as an artistic tool – if someone wants to use a liquid’s absorption spectrum as an object’s reflection spectrum, I don’t want to stop them.