Thoughts on making Cycles into a spectral renderer

@troy_s my goal is not only to add spectral rendering, but to improve the quality of the code in there.

My first step is to make the existing system work with spectral sampling without changing anything about it for the user. I will not merge hard-coded rec.709 conversions into master because, as you say, that's outdated and would just make it even harder to remove. Users should be able to use input data in any colour space or define things spectrally, but that's not going to happen for a while yet.

I will make sure the conversion from some triplet in some colour space to spectral data is done in an abstract way, such that there can be various concrete implementations for different use cases. For emission, a parametric approach seems really nice, for example a Gaussian distribution around a given wavelength.
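
As a rough sketch of what such a parametric emitter could look like (the names and defaults here are purely illustrative, not what would land in Cycles):

```python
import numpy as np

# Illustrative sketch only: a parametric emission spectrum defined as a
# Gaussian around a user-chosen centre wavelength.
def gaussian_emission(wavelengths_nm, centre_nm, width_nm):
    """Unnormalised Gaussian emission profile sampled at wavelengths_nm."""
    return np.exp(-0.5 * ((wavelengths_nm - centre_nm) / width_nm) ** 2)

wavelengths = np.arange(380.0, 731.0, 10.0)  # visible range, 10 nm steps
orange_light = gaussian_emission(wavelengths, centre_nm=600.0, width_nm=25.0)
```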

The reason mixing the three lights works well here is that it is really fast, and it has to be. My understanding of the Meng process is that it has to go through an optimisation process to determine a spectrum, which I imagine is far too expensive to run every time a colour is needed in the engine. If that understanding is not correct, I'll have to take another look at it.

Right now, using the rec.709 primaries and mixing between them enables me to get something out of the engine for testing. It certainly isn’t the end goal and I won’t be merging it hardcoded like this.


Meng works pretty fast; it is essentially bound by the solver speed.

With that said, I'm neither here nor there on the matter. I'm more concerned with the solve in question. Specifically, I'm a little concerned that the spectral primaries were solved against an array of ones. This means that while the procedure seems to work when you put a D65 light on the surface, I am now of the belief that it's fundamentally wrong. Granted, I may be totally out to lunch here.

The reason I believe it's wrong is that the primaries don't appear to be usable as an emission. At least from what I can tell, if we were to output equal energies of the RGB spectra as shown, we'd end up with a flat spectrum, and that means it would be emitting Illuminant E.
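
To illustrate with made-up numbers (the basis curves below are stand-ins, constructed only to satisfy the "solved against ones" constraint):

```python
import numpy as np

# If R(λ) + G(λ) + B(λ) = 1 at every wavelength, an equal-energy
# r = g = b emission is flat, which is Illuminant E by definition.
wavelengths = np.arange(380.0, 731.0, 10.0)
ones = np.ones_like(wavelengths)
basis_g = 0.4 + 0.3 * np.cos((wavelengths - 550.0) / 120.0)  # stand-in
basis_r = 0.5 * (ones - basis_g)                             # stand-in
basis_b = ones - basis_g - basis_r                           # closes the sum

emission = 1.0 * basis_r + 1.0 * basis_g + 1.0 * basis_b     # r = g = b = 1
print(np.allclose(emission, ones))  # True: a flat spectrum, Illuminant E
```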

Correct me if I’m wrong here. I like being wrong.


You are right that an r=g=b emission would be an E illuminant in the current system, though this doesn't have to be the case. My initial plan was actually to multiply whatever spectrum I got from the emission colour with D65 (or more generically, a standard illuminant for the white point of the current colour space) so that white materials appear white. There is the alternative of doing LMS scaling such that E illuminants become achromatic in the output, but I'm not so fond of that approach, as it unnaturally weights the emission spectrum.
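
A minimal sketch of that plan, with stand-in bases and a stand-in D65 table:

```python
import numpy as np

# Sketch only: upsample the emission colour to a reflectance-style
# spectrum, then weight it by the white point illuminant so that a
# (1, 1, 1) emitter radiates D65 rather than Illuminant E.
def emission_spectrum(rgb, basis_rgb, illuminant):
    """rgb: (3,); basis_rgb: (3, n) solved bases; illuminant: (n,) SPD."""
    reflectance = np.asarray(rgb) @ basis_rgb  # three-light upsample
    return reflectance * illuminant            # weight by the reference white

n = 36                              # 380-730 nm in 10 nm steps
basis = np.full((3, n), 1.0 / 3.0)  # stand-in bases that sum to ones
d65 = np.ones(n)                    # stand-in for the real D65 table
white = emission_spectrum((1.0, 1.0, 1.0), basis, d65)  # equals d65 here
```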

Ideally users would just know what they want and I'd let them do it, but unfortunately I feel that for 99% of cases this will be wrong.

We can relatively easily use different models for generating emission spectra vs reflection ones, but I don't know the answer when it comes to what is 'right'. In my mind, and based on previous tests, I can see a system working where emission spectra are Illuminant E when r=g=b, just for consistency's sake. If people want daylight, they have to make it. Later down the track, it might make sense to expose a reference white option in emission spectra. I don't have an answer :man_shrugging:

I think this is working, at least for reflection spectra. It is intuitive, fast and predictable. What is leading you to the thought that this is fundamentally wrong?
How you get that D65 light is the open question for me.


That’s a great question that I hadn’t contemplated, @kram1032!

I re-did the computation with the hyperbolic tangent approach. Here is a plot of the sum of the three extremal RGB reflectance curves using the original method (blue curve) compared to the sum of the three curves for the hyperbolic tangent method (orange curve).

It is considerably better, but still not great.

(I can apparently only upload one image per post, so I’ll continue this discussion in the next post…)


When the optimization is done so that the sum of the three has a reflectance <= 1, it turns out the result is almost identical to the previous one. Here is a superposition of both sets of curves.
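
For anyone curious, here is a rough sketch of what such a constrained solve can look like, with made-up colour-matching data and targets (the real solve uses the methods in the papers below):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: find three reflectance curves, each hitting its target
# tristimulus values, whose pointwise sum stays at or below 1, while
# minimising a slope-squared roughness. T and the targets are stand-ins.
n = 36
rng = np.random.default_rng(1)
T = rng.random((3, n))                                   # stand-in CMF matrix
targets = [T @ (rng.random(n) * 0.3) for _ in range(3)]  # feasible targets

def roughness(p):
    """Sum of squared slopes of the three curves (smoothness objective)."""
    return sum(np.sum(np.diff(r) ** 2) for r in p.reshape(3, n))

constraints = (
    [{"type": "eq", "fun": lambda p, i=i: T @ p.reshape(3, n)[i] - targets[i]}
     for i in range(3)]
    + [{"type": "ineq", "fun": lambda p: 1.0 - p.reshape(3, n).sum(axis=0)}]
)

result = minimize(roughness, np.full(3 * n, 0.25), method="SLSQP",
                  bounds=[(0.0, 1.0)] * (3 * n), constraints=constraints)
basis_r, basis_g, basis_b = result.x.reshape(3, n)
```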

BTW, the smoothest reflectance reconstruction methods have now worked their way through the publication pipeline. First, the three methods: (Burns SA. Numerical methods for smoothest reflectance reconstruction. Color Res Appl. 2020;45(1):8-21.) Also, an application of the second method to chromatic adaptation: (Burns SA. Chromatic adaptation transform by spectral reconstruction. Color Res Appl. 2019;44(5):682-693.)


It’s a challenging question. A few options:

  1. Render everything under the assumption of equal energy in and out, aka Illuminant E, then adapt for display. I'm leaning towards this.
  2. Render everything under biased RGB.

In the case of the former, full wavelength inputs make sense, with a "white" light being Illuminant E. If there is a requirement for spectral, it is probably prudent to skip any non-spectral RGB working space and migrate to a spectral working space such as BT.2020 or even wider, up to the maximal area under CIE 1976. The rendering problems are averted as everything is spectrally calculated, and the RGB primaries simply form the basis for calculating an upsample, at which point it would require a set of primaries covering the largest region of the spectrum locus area, I believe.

The more I think about it, the less sure I am that 2. is appropriate. It seems to have inherent flaws.

Because the same primaries do not work for emission!

Imagine a rendered projector that is projecting D65 light, directly at the screen or hitting Spectralon. Using an R=G=B interface should result in D65 for our imaginary projector. Yet it won't. So we are relying on the surface to adapt the proper white, which strikes me as a strange choice.

It would seem the most logical thing is to use Illuminant E as the base, and work out the chromatic adaptation as part of the display rendering transform. That means the proper solve target wouldn't be Illuminant E, but rather D65 post display rendering transform. As in, it feels like the approach to the solve is 85% there, but the target chromaticity for each of the RGB primaries needs adjusting. E.g. the target chromaticity for BT.709 red should not be D65 BT.709 red, but rather BT.709 red at D65 transformed to Illuminant E.
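
As a concrete sketch of that adaptation, using the standard Bradford transform (the matrix and chromaticities below are the usual published values):

```python
import numpy as np

# Adapt the BT.709 red chromaticity from its native D65 white to
# Illuminant E with a Bradford CAT; the result would be the solve target.
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def xy_to_xyz(x, y):
    """xy chromaticity to XYZ with Y = 1."""
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def bradford_adapt(xyz, src_white, dst_white):
    """Bradford chromatic adaptation from src_white to dst_white."""
    scale = (M_BRADFORD @ dst_white) / (M_BRADFORD @ src_white)
    return np.linalg.solve(M_BRADFORD, (M_BRADFORD @ xyz) * scale)

d65 = xy_to_xyz(0.3127, 0.3290)          # D65 white point
e = np.array([1.0, 1.0, 1.0])            # Illuminant E white point
red_709_d65 = xy_to_xyz(0.64, 0.33)      # BT.709 red at D65
red_709_e = bradford_adapt(red_709_d65, d65, e)
x, y = red_709_e[:2] / red_709_e.sum()   # adapted chromaticity: the target
print(x, y)
```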

Working backwards, this would mean that our same projector example, projecting pure red at the camera, would in fact be projecting Illuminant E BT.709 red, but the display rendering transform would shift the result to D65, and the final chromaticity at the display would be D65 BT.709 red.

I like this approach since it is simple and works nicely in the default case. If someone really cares about the spectral distributions, we should enable emission spectra with a different white point, and then the user could also adjust the chromatic adaptation which occurs post render.

I'm going back to square one and trying to tackle just emission, seeing if I can get that working solidly, since it is the part of the code base I understand the least. Once I have that, adding colours back into other materials should be relatively straightforward.

Just wanted to say, as someone working at a studio that uses a spectral renderer while knowing close to nothing about renderers and not much more about coding, it's really interesting to read this topic.

Thanks for keeping this interesting conversation rolling, and huge thanks for all your efforts Chris!


Glad to hear it is interesting. Out of interest, what renderer do you use at work and what is the user experience like? Do you always use RGB colour or are you specifying materials based on wavelength? @dan2


Using Manuka in the assets realm, so my insights are pretty limited (pretty much as a user for QC only; I'm just really interested in reading about development of any kind). I can check with the devs though, to see what's public information and what's not.


I've read a few papers about the research that went into Manuka and its colour pipeline; any information or thoughts you could share about it would be hugely appreciated.


I can check with the more approachable developers and see where it goes. Weta is on shutdown till 5th of Jan, but I’ll check right after. Would be really cool to see something like this making it into Blender.


That sounds like a good plan. Would it, perhaps, be as simple as multiplying an Illuminant E spectrum with a blackbody (or whatever white point you choose) spectrum, and probably renormalizing to have the same area? It'd certainly work for pure white, but is that enough in general? If it is, then adapting the current reflectance spectral colors to be (kinda) usable as emission spectra should be fairly easy: just get the blackbody node set up to work with spectra in general, multiply with that, and you are done. (Or alternatively, reimplement the formula for blackbody radiation. Should be easy enough.)
In fact, if that's all it takes, that could be a very reasonable future workflow for lights: literally just give the blackbody node an additional color input. If it's set to white, the light emits plain blackbody radiation; otherwise said radiation is multiplied by the (spectrum of the) color. And perhaps one could even do really silly things: want an actually gold light? Plug in as "color" (which internally is treated as a spectrum) the reflectance spectrum of gold, lol.
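
A minimal sketch of that multiply-and-renormalise idea, using Planck's law directly (SI units; the "gold" curve below is a stand-in, not measured data):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m / s
K_B = 1.380649e-23   # Boltzmann constant, J / K

def blackbody_spd(wavelengths_nm, temperature_k):
    """Planck's law: blackbody spectral radiance (arbitrary overall scale)."""
    lam = wavelengths_nm * 1e-9
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K_B * temperature_k))

def tinted_emission(wavelengths_nm, temperature_k, colour_spectrum):
    """Blackbody emission times a colour spectrum, renormalised to the
    same total area (energy) as the plain blackbody."""
    bb = blackbody_spd(wavelengths_nm, temperature_k)
    tinted = bb * colour_spectrum
    return tinted * (bb.sum() / tinted.sum())

wavelengths = np.arange(380.0, 731.0, 10.0)
gold_like = np.clip((wavelengths - 450.0) / 200.0, 0.05, 1.0)  # stand-in "gold"
spd = tinted_emission(wavelengths, 6500.0, gold_like)
```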

In the end, maybe there could be something like a "Principled shader" for lights, which effectively is just spectral data weighted by the chosen color and white point in all the relevant ways.

I also thought of another very minor issue with using spectra meant for reflectance as emission spectra. I think intuitively the color's lightness should translate into the overall emission energy. Like, the area under the spectrum should probably be the lightness of the color (which, I think, is just the maximum channel?), just so the right amount of energy is generated.
Maybe that's complete nonsense. I'm not sure. But without this, currently, a "white" (1,1,1) light will have three times the energy of a red (1,0,0) light at the same strength.

For reflectance, that’s certainly both expected and, really, intended behavior. For emission I think it might not be.
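
A sketch of that normalisation with stand-in bases (taking "lightness" as the maximum channel, per the guess above):

```python
import numpy as np

# Scale the upsampled emission spectrum so the area under it equals the
# colour's maximum channel, so (1,1,1) and (1,0,0) carry equal energy.
def normalised_emission(rgb, basis_rgb):
    """rgb: (3,); basis_rgb: (3, n). Returns a spectrum with area max(rgb)."""
    spd = np.asarray(rgb) @ basis_rgb
    area = spd.sum()
    return spd * (max(rgb) / area) if area > 0.0 else spd

basis = np.full((3, 36), 1.0 / 36.0)               # stand-in flat bases
white = normalised_emission((1.0, 1.0, 1.0), basis)
red = normalised_emission((1.0, 0.0, 0.0), basis)
print(white.sum(), red.sum())                      # both 1.0: equal energy
```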

@Scott_Burns Thanks for the update! Can you share the CSV version again, please? :slight_smile:


These values correspond to wavelengths from 380 nm to 730 nm in 10 nm intervals:

http://scottburns.us/wp-content/uploads/2018/09/RGB-components-comma-separated.txt

I’ve also updated the webpage to include these new developments.


This is how I would expect it to behave, just as with current RGB lights. If we did split lights into emitted power and chromaticity, we could allow for equal-energy lights of different colours, but unless we account for the spectral response of the film, they still won't have equal luminance. I believe Manuka has tooling to allow for intuitive control of colour independent of luminance, but I don't know if that's the right move for Blender at this stage.


Seems like a good, scalable approach.


The great thing about the RGB upsampling being used here is that it is a true implementation of the 'three light model': it is just three spectra added together. If the current light colour workflow is intuitive, then the first version of spectral lights will be just as intuitive, since nothing will change. Later down the track we can add fancier controls.

Ideally I would love it if users were able to define materials as completely wavelength-dependent, but there are technical issues which make that difficult to implement efficiently.


I believe that intuition is false; it requires drawing a clear line between radiometric energy and perceptual energy. There is a relationship between perceptual energy, known as luminance, and physical energy, known as radiance.
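
For concreteness, the standard relation, sketched here with a crude stand-in for the CIE luminous efficiency curve:

```python
import numpy as np

# Luminance = 683 * integral of spectral radiance times ybar(λ).
# The ybar below is a Gaussian stand-in peaking at 555 nm, not CIE data.
wavelengths = np.arange(380.0, 731.0, 10.0)
ybar = np.exp(-0.5 * ((wavelengths - 555.0) / 45.0) ** 2)

def luminance(spectral_radiance, step_nm=10.0):
    """Approximate luminance (cd/m^2) of a spectral radiance distribution."""
    return 683.0 * np.sum(spectral_radiance * ybar) * step_nm

flat = np.full_like(wavelengths, 0.01)                        # Illuminant E-ish
red = np.where(wavelengths > 600.0, 0.01 * 36.0 / 13.0, 0.0)  # same total area

# Equal radiometric energy, very different luminance:
print(luminance(flat), luminance(red))
```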

The ratio of energy here is part of the issue with RGB rendering; it is quite a hacked shortcut. That is, the "energy" contained in the reddish, greenish, or blueish channel of RGB is not terribly accurate with regard to the underlying spectral energies used to composite them. It sort of works, but it is quite a hack, as the various demonstrations have shown.

I think the challenge is separating the chromaticity of the light from the display bound chromaticity of the light.

That likely means PBR surfaces need a "neutral" design, which seems to imply being constructed under Illuminant E assumptions and then transformed to the destination display context.

The final transform to display would then shift the output white and primaries accordingly. The question here is how to derive the proper primaries such that at the display they correspond to the specified primaries.

Specifically, none of this is terribly interesting beyond the specific case of using three solved primaries to maintain a synthetic RGB model in a spectral system for the folks designing the materials. I don't think it makes much sense to use a small gamut here though, as it ends up constraining the materials you are designing to that gamut, and they end up not terribly future-friendly.

Using a wider gamut to design them would probably lead down the luminance rabbit hole of poor rendering again, even if taken to spectral.

There doesn't seem to be much option beyond upsampling on a case-by-case basis, to be honest.

You folks have some of the most bleeding edge work being done here, along with some fantastic colour peeps including Mansencal. This paper has some of the most recent work there from none other than Hanika.

There certainly doesn't seem to be one silver bullet here, though if we had colour management throughout, we could get somewhere much closer to an ideal solution.

By having a difficult-to-break, even if limited, default of rec.709, and then allowing the flexibility (and responsibility) of defining colours in larger spaces, which might require alternate approaches for spectrum synthesis, we almost get the best of both worlds. But that is definitely a distant-future sort of goal.
