Thoughts on making Cycles into a spectral renderer

I guess I’d experiment. Add the three primaries together as you eliminate digits, and decide how far from 1.00000 you’re willing to go. The underlying CMF I used had five significant digits. But why not just use all the digits in your software? It’s not like you’ll save bytes by truncating them, and the extra digits won’t do any harm.
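A quick way to run that experiment, as a sketch: round the three component spectra to fewer and fewer digits and watch how far their sum drifts from 1.0. The arrays below are random stand-ins that sum to 1 by construction; in practice you would use the real components from the published tables.

```python
import numpy as np

# Random stand-ins for the three component spectra (each row of the
# Dirichlet draw sums to 1, mimicking components that sum to a flat 1.0).
rho_r, rho_g, rho_b = np.random.dirichlet(np.ones(3), size=36).T

for digits in (5, 4, 3, 2):
    total = (np.round(rho_r, digits)
             + np.round(rho_g, digits)
             + np.round(rho_b, digits))
    print(digits, "digits -> max deviation from 1.0:", abs(total - 1.0).max())
```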

That’s true. The issue is that, on this end, I also have to enter everything by hand. I’ve tried setting up a keyboard macro to paste the values in, but Blender has a strange way of interpreting Ctrl, so I’ve had to do everything manually…

I wouldn’t call this idealized.

Idealized would mean taking the dominant wavelength and centering on it with a perfectly symmetrical[1] sampling.

Should be more than possible.

[1] Relative to the spectral locus hull, not perfectly balanced spectral ratios.

Already done a few tests. Works fine.

Just haven’t arrived at a method to solve balancing the primaries such that the reference space primaries are precisely located.

My math is too weak on that front.

I consider “idealized” to be the result of some underlying optimization. Nature optimizes all the time; equilibrium in structural mechanics, for example, can be thought of as the minimum of a total potential. I see no reason why “perfectly symmetrical” would be a worthy goal when there is nothing symmetrical about the physiology of human color perception.

I think that is the issue with this approach. As the primaries approach spectral primaries, the resultant spectra move further and further from the goals of being bounded by 1 and summing to 1 across the spectrum. It neatly solves the XYZ locations of the primaries (if the spectra are positioned correctly), but doesn’t satisfy the other constraints.

Idealized in that you can generate perfectly smooth primaries, with complements included, for any reference space, with fully tunable spectral widths to allow for a parametric approach.

I see no reason why this would be the case.

Reflectance sums aren’t that complex.

If, for example, you’re trying to create REC.2020 primaries (which are already spectral), then in order for white to have the same reflected power as a perfect reflector (what most people would expect a perfect white material to behave like), the reflectance spectrum would have to be much greater than 1 at the three primary wavelengths and equal to 0 everywhere else.
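To put rough, back-of-envelope numbers on this (the 1 nm line width and the equal power split between lines are illustrative assumptions, not anything from the spec):

```python
# Three narrow spectral lines must return the same total power as a flat
# reflector of 1.0 over a ~400 nm visible band under an equal-energy
# illuminant. Assuming 1 nm-wide lines and an equal split between them:
band_nm, line_width_nm, n_lines = 400.0, 1.0, 3
peak = band_nm / (line_width_nm * n_lines)
print(peak)  # ~133, i.e. wildly above the physical reflectance limit of 1.0
```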

REC.709 seems about the largest space which can satisfy all these rules, though I haven’t confirmed that.

2020-like is a goal given that 709 is done like dinner in 2018 as a reference space.

The width of the spectral primary would prevent full gamut REC.2020 upsampling from RGB, but that seems a fair trade-off.

The tunability would be useful in other applications such as attempting to get to a decently balanced width for digital painting in a wide gamut.

“perfectly smooth” in what sense? No discontinuities in slope? Curvature?

“complements included” I’m sure compliments will be forthcoming for your efforts :wink:, but I don’t see how complementary colors apply here.

“for any reference space” just a matter of reformulating the optimization statement.

“tunable spectral widths to allow for a parametric approach” you lost me there. You want your spectrum to have a specified width?

A spectral complement. An infinitely narrow slice of a wavelength essentially lives at the periphery of the spectral locus. You can pull that inwards by widening the spectral slice, or by distributing complementary wavelengths, or a combination of both. Metameric combinations.

In this case, the idea is to create essentially spectral versions of the particular RGB space’s dominant primary wavelengths, and to use the other spectral primaries to pull the near-locus values to the appropriate reference space primary.

I think that would be ideal, as it would allow for variations on a particular primary, at the trade-off, of course, of the “perfect” primary-based gamut in the case of something like REC.2020.

I think I understand the source of our disagreement. I am incorrectly referring to “primaries” as being the three spectral distribution components that get weighted and summed to produce a composite distribution. The term “primaries” has a very specific meaning in colorimetry, and one that I think you are referring to above. I’ve modified my webpage to eliminate all references to primaries, and now refer to them as optimal RGB spectral components.

As you are aware, there are an infinite number of metameric spectral distributions that map to the rgb coordinate axes. All members of this metameric suite map to the same vector within the spectral envelope. What I’ve done is identify the unique triplet of spectral distributions that (a) map to the rgb axes locations in the spectral envelope, and (b) have optimal properties with regard to smoothness (minimal slope magnitude integral) and magnitude (i.e., normality).
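For readers following along, a minimal sketch of this kind of constrained optimization is below. The Gaussian lobes are crude stand-ins for the real CIE color matching functions, and the least-squared-slope objective is one common choice that may differ in detail from the exact formulation on the webpage.

```python
import numpy as np

wl = np.arange(380, 731, 10.0)   # wavelength samples, nm
n = wl.size

def lobe(mu, sigma):
    # Crude Gaussian stand-in for one row of the real CIE 1931 CMF table.
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

A = np.vstack([lobe(600, 40), lobe(550, 40), lobe(450, 30)])  # 3 x n

D = np.diff(np.eye(n), axis=0)   # forward-difference operator, (n-1) x n
H = 2.0 * D.T @ D                # Hessian of the slope-squared objective

def least_slope_metamer(rgb):
    # Minimize sum((rho[i+1] - rho[i])**2) subject to A @ rho = rgb,
    # via the KKT system  [H  A^T; A  0] [rho; lam] = [0; rgb].
    K = np.block([[H, A.T], [A, np.zeros((3, 3))]])
    rhs = np.concatenate([np.zeros(n), rgb])
    return np.linalg.solve(K, rhs)[:n]

rho = least_slope_metamer(np.array([0.4, 0.3, 0.2]))
print(np.allclose(A @ rho, [0.4, 0.3, 0.2]))  # True: the constraint holds
```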

I realize I’ve come on pretty strong in my comments above. It’s just a knee-jerk reaction to some of your previous abrasive comments on other discussion sites (e.g., “Especially skip that garbage Burns crap, unless you want to fall into the same looney bin @briend has.” (link))

So in summary, what are the actual goals for an RGB -> Wavelengths transform here? (A rough sketch of one possible form follows the list.)

  • fast
  • parametric (technically that’s just for speed, to allow a few simple linear combinations)
  • as smooth as possible (no weird gaps in the spectrum)
  • limited to individual frequency responses between 0 and 1 (Energy conservation)
  • visually as close a match as possible
  • with controllable peaks (to approximate spectrally pure colors)
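A minimal sketch of one shape such a transform could take; every number here (the centre wavelengths, the shared width, the clipping strategy) is an illustrative assumption rather than a settled design:

```python
import numpy as np

wl = np.arange(380, 731, 5.0)   # wavelength samples, nm

def rgb_to_spectrum(r, g, b, width=35.0):
    # Toy parametric upsampling: one Gaussian lobe per channel, clipped to
    # [0, 1] to respect the energy-conservation bound. The centre
    # wavelengths are hypothetical; a real transform would be optimized so
    # that (r, g, b) round-trips to the intended chromaticity.
    centres = (610.0, 545.0, 465.0)
    s = sum(w * np.exp(-0.5 * ((wl - c) / width) ** 2)
            for w, c in zip((r, g, b), centres))
    return np.clip(s, 0.0, 1.0)

spectrum = rgb_to_spectrum(1.0, 0.5, 0.1)   # a reddish-orange reflectance
```

Note how the `width` parameter directly trades gamut reach against smoothness, which is the “parametric” goal in the list above.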

I think it’s alright if very pure colors (strong peaks) come out rather dark. Energy conservation ought to be prioritized. That’s how real materials behave as well, isn’t it? You’d have to use specific light sources that hit those specific wavelengths to make those materials look bright.

There would automatically be a saturation/brightness tradeoff, which is entirely natural. It’s because reflectors don’t behave like emitters. You usually can’t get pure spectral colors from reflections without, say, some cancellation trickery like in Pollia condensata berries, which, in many lighting situations, do look rather dark. Nowhere on the spectrum do they reflect less than about 3% of light, but they peak at about 19% (as judged by eyeballing this chart).

AFAIK pigment-based colors are usually broader. Just eyeballing (I know that’s hardly scientific) the various spectra that come up when querying Google for “pigment reflectance spectra” suggests that most have peaks of about 50 nm half width at half maximum (HWHM), with some as narrow as ~25 nm HWHM. The above-linked berries manage around 10 nm HWHM in two large peaks for the most intense, brightest channel, so they are about “twice as saturated” as the best pigments (if saturation is taken to be the inverse of the width of the peaks of a given spectrum; I know that’s not quite accurate).

So ideally, what ever transform we get would be able to roughly deal with either case.


This probably very much isn’t the concern right now, but I think, if we actually get a spectral Cycles implementation, there might be a neat way to visualize a color more accurately for artists: in addition to the normal RGB swatch, show a gradient which horizontally displays bounce depth (how this color will look after it has bounced n times) and vertically displays different temperatures of blackbody radiation. That way you could, basically at a single glance, judge how saturated a color truly is across various lighting conditions. That should be enough to make clear that you’ll rarely, if ever, want 100% saturated colors for reflective materials (because stuff will be virtually black).
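A sketch of the per-cell computation such a swatch would need, assuming the caller supplies the material spectrum and a (3, n) array of color matching functions on the same wavelength grid:

```python
import numpy as np

wl = np.arange(380, 731, 5.0) * 1e-9   # wavelengths in metres

def blackbody(T):
    # Planck spectral radiance (unnormalized) for temperature T in kelvin.
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wl**5) / (np.exp(h * c / (wl * k * T)) - 1)

def swatch_cell(reflectance, n_bounces, T, cmf):
    # Tristimulus of a material seen after n bounces under a blackbody
    # illuminant: each bounce multiplies the throughput by the reflectance.
    spd = blackbody(T) * reflectance ** n_bounces
    xyz = cmf @ spd
    return xyz / max(xyz[1], 1e-12)   # normalize to Y = 1 (guard tiny Y)
```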


You make some great, thought provoking points here. All of my attempts to design RGB->wavelength transforms have had the goal of trying to mimic the reflectance curves of paints and pigments. My target application was painting-type programs and subtractive mixture. But I can see how the field of rendering has many additional needs, including the design of emitters. You’ve piqued my curiosity to find some way to modify the optimization process I use to favor less broadly distributed waveforms. Hmmm…

Some great observations here and it is refreshing to have a new perspective, thank you.

As a general note, I think everything you have said is valuable and should be considered. The goal of the RGB to spectrum conversion in my workflow is to create usable spectra from RGB images. I have assumed, maybe incorrectly, that the most suitable spectra would be the ones with the least slope (the fewest ‘spikes’, which lead to unpredictable behaviour under different illuminants), so that the colour would act predictably, even if a little boring (if that’s a thing), in any lighting.

What I have done so far, which has worked reasonably well, is to use RGB-synthesized spectra for most things, and then fall back to defining the spectrum manually for things which need it, such as reflectances outside the REC.709 gamut, unique absorption profiles, etc. Getting accurate representations of things like gold is very easy when copying the measured values, and doing so gives more natural saturation after repeated bounces than any synthesized spectrum.

The berries you mentioned use structural coloration, which is another thing best left to being defined mathematically, driven by the wavelength, rather than attempting to synthesize the spectra from RGB. I believe this is what causes the narrow peaks in reflectance.

One concern I have about a parametric approach (Gaussian distributions around 3 primary wavelengths) is that in order to match the perfect reflector of RGB (1, 1, 1), the reflectance values would peak at over 1, causing non-convergent materials which won’t render very well. This is because the spectrum would be sparser than the spectrum Scott Burns has created (obviously, since his sums to 1 across the entire spectrum).

Another issue is that if you use an RGB image as an environment, you would likely want the illuminant’s white point to be D65, not E (which will look blue-tinted in the final render). To do so, you have to multiply the environment spectrum by D65. This produces the nicest response if the source white point maps to a perfect D65 spectrum, rather than a lumpy one (as you would get from the parametric primaries).
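Concretely, that multiplication might look like the following sketch, assuming the upsampler maps (1, 1, 1) to a flat equal-energy spectrum. The 6504 K blackbody here is only a rough stand-in for the real tabulated D65 SPD:

```python
import numpy as np

wl = np.arange(380, 731, 5.0)   # nm

def d65_like():
    # Rough stand-in for D65: a 6504 K blackbody, normalized to peak at 1.
    # The real D65 table deviates from this; use the published SPD in practice.
    lam = wl * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    spd = (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * k * 6504.0)) - 1)
    return spd / spd.max()

def environment_spd(e_white_spectrum):
    # Tint an E-white-point environment spectrum toward D65, so that flat
    # (equal-energy) white pixels end up emitting D65-shaped light.
    return e_white_spectrum * d65_like()
```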

In my mind, the best compromise between these would be to have a predictable conversion for implicit RGB to spectrum, and then allow users to define custom spectra, potentially with some assistance such as a gaussian distribution node which takes a peak value and a width, so that they could recreate the method Troy suggested if desired.

I also think your idea of a viewer which represents bounces and differing illuminants would be fantastic. Really, I think this fits into a larger-picture change of allowing viewer nodes in the shader node editor. I’m not exactly sure how this should work, but it would be nice. There are some other renderers out there (Thea) which show you your colour under different illuminants, which is nice. Technically, 100% saturated doesn’t really make much sense in the spectral domain, since pure red in REC.709 is still inside the spectral locus, meaning you create it by mixing more than one wavelength. But yes, a tool to visualise spectra would be a great addition once this is working.


Yes, that’s what I meant by “cancellation trickery”. Interference effects cause these deep, rich colors. I guess you could also “just” model that kind of thing with a car-paint-type coating material. The berries definitely look much like some crazy deep blue car paint.

I do think broad spectra would be more important, at least initially. Perhaps there could even be different models for different purposes, with a “broad” variant used as much as possible.

Though in something like Rec.2020, where the primaries are pure wavelengths, if you go to the extremes, the only accurate representation would be narrower spectra. It’d be neat if the model were able to incorporate that, so that the generated spectra are somehow truly representative of how extreme the saturation of the colors in question is, all while obeying energy conservation.

The most classic thing to optimize would be to go for something like maximum entropy. I don’t quite know how that might apply here, but in principle, that seems more natural than any kind of slope-based argument. And if you do it right, it just might be possible to incorporate some kind of peaking/diffusion parameter.

Maximum Entropy solutions typically are super smooth. In some sense they are the smoothest there could be, because being “jagged”, sharp, or varying wildly typically requires a lot of extra information, and MaxEnt minimizes how much information is presupposed.
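For the curious, the maximum-entropy version can be posed directly with an off-the-shelf solver. As in the earlier sketch, the Gaussian lobes are crude stand-ins for the real CMF tables:

```python
import numpy as np
from scipy.optimize import minimize

wl = np.arange(380, 731, 10.0)
n = wl.size

def lobe(mu, sigma):
    # Crude Gaussian stand-in for one row of the real CMF table.
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

A = np.vstack([lobe(600, 40), lobe(550, 40), lobe(450, 30)])
rgb = np.array([0.4, 0.3, 0.2])

def neg_entropy(rho):
    return np.sum(rho * np.log(rho + 1e-12))   # minimize negative entropy

res = minimize(
    neg_entropy,
    x0=np.full(n, 0.5),
    bounds=[(1e-9, 1.0)] * n,                  # keep reflectance in (0, 1]
    constraints=[{"type": "eq", "fun": lambda rho: A @ rho - rgb}],
    method="SLSQP",
)
rho = res.x   # typically comes out smooth, as the MaxEnt argument suggests
```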

I’m not sure if something like this could be done in an efficient manner, but perhaps you could figure out something along those lines?

OK, I call a truce then. I also apologize for the hasty phrasing on my part.

I was not super keen on briend’s lookup-based madness. That is, the essence of why yellow and blue paint turn green depends, as you know, more on the spectral spread than on a particular duplicated shape. What matters most for the creative simulation, assuming arbitrary primaries and gamut volumes, is the spectral width and composition. Under that lens, it struck me that we could foreseeably come up with parametric solutions to design custom spectral distributions that would be “idealized” for particular gamut/primary combinations.

My math is but a gnat compared to yours, however, and as such this is a bit of speculative fiction, as I don’t possess the math power you do.

Having thought on it more, I believe the key trait is that our spectral primary compositions must sum up to a normalized Y of 1.0 for energy conservation.

I am hoping that someone with a larger brain might be able to arrive at a parametric and tunable solution. It would be a heck of a coup for painting applications as well as for spectral upsampling of existing textures, etc.

I think if the Math Wizard @Scott_Burns is able to arrive at a parametric series, the gamut loss could be minor. The loss increases with spectral width, but a parametric control would make it feasible to reach out far.

Finally, the REC.2020 primaries need no upsampling, given they are themselves precise spectral samples. So the whole need becomes moot as we reach the spectral locus.

I don’t believe that is correct. You should convert to E before converting to spectral. The D65 would be handled at the tail end.

I think we might be thinking about different things. I agree that for reflectance and absorption (everything but emission) the white point should be E. As for lights, in order to get natural tones, the best distribution to use isn’t E. Each light should be customisable with its own spectrum, to allow creating things like fluorescent lights, but a nice default is to match the D65 spectrum, since that is (1) what we are used to seeing in daylight and (2) what the colours in most photos are assumed to be illuminated by.

The only way to make E look white without making the solution a bit hacky is to change the white point of the output space. I don’t know if that’s a good idea considering most things coming out of blender are untagged and would therefore look different outside of Blender.

This is a given. You model things to roughly where they should be.

Bit of a misnomer.

There seems to be a bit of confusion between modelling the colour of a particular light and chromatic adaptation. D65, E, C, you name it: all will appear achromatic after adaptation.

This thread is sadly turning into a circle jerk, when it should stay on course, focused on modelling Cycles into a spectral renderer. Upsampling plays a role here and, as a result, is an important thing to discuss. Modelling the colour of a light isn’t, given the chromaticity of the lamp is already dictated by the RGB triplet relative to the reference colour space.

Again, if you are speaking about upsampling the colour of the light, this is a moot point. If you are speaking about the adaptation achromatic white point in a target colour space, that too is a moot point.