Thoughts on making Cycles into a spectral renderer

@brecht Could you give some hints on this one? I’m trying to step through kernel_path_integrate and figure out what the different parts do; right now I’m stuck on the light/lamp primitive colour, as I can’t figure out where their colour is calculated.

My idea is that I’ll need to somehow convert the float3 weights in shader_emissive_eval to wavelength intensities, but I don’t understand what each of the parts represent.

Here’s my first attempt:

ccl_device float3 shader_emissive_eval(ShaderData *sd, float3 wavelengths)
{
  if (sd->flag & SD_EMISSION) {
    float3 lightWeights = emissive_simple_eval(sd->Ng, sd->I) * sd->closure_emission_background;
    return rec709_to_wavelength_intensities(lightWeights, wavelengths);
  }
  else {
    return make_float3(0.0f, 0.0f, 0.0f);
  }
}

EDIT: This attempt is wrong, didn’t seem to do anything. The thought crossed my mind that converting to wavelengths twice would essentially re-scramble the contributions. I’ll need a more solid understanding of the shading code in order to figure out where I’ve gone wrong.

I’d like to push a branch, but I assume push rights are reserved for BI devs only. What is a suitable way to make my branch accessible, @brecht?


Although direct lighting from both mesh lights and lamps is still broken (I’m getting more or less colour depending on the roughness value, so it might have something to do with MIS), we’re starting to see some elements of spectral sampling here, notably in the brightness of the blue ball.

I’ve done a direct comparison here, with no settings changed in the file. The blue ball is a default Principled shader with a colour of rgb(0, 0, 1), illuminated by an rgb(32, 0, 0) emission mesh light, plus a white rgb(4, 4, 4) mesh light for reference.

Here’s the same saturated light and principled material in master. As you can see, the red light is not providing any light to the saturated blue diffuse material, even with an emission strength of 32.


For now, ignore the fact that the lights seem to have the wrong colour in most of the image, it is just an unresolved bug.


What would the spectra here currently look like? To get a (presumably by sRGB standards) pure blue sphere to reflect a pure red light, there must be overlap in their spectra, so you probably aren’t doing the most naive thing imaginable. Did you end up going with that fancy paper that was brought up way earlier in the thread or are you doing something else?

That being said, it seems to me that the red light right now acts on diffuse materials as if it were white, as per the bug you mentioned. If that’s what’s happening, the result would actually end up looking darker than this, right?

(Of course, the actual used spectra are independent of getting wavelengths to be tracked correctly at all. They’ll just eventually define the look of Cycles for materials that don’t use explicit spectral data)

Why isn’t the ground plane affected by the strong red light? Or is it?


The image using spectral sampling still has multiple issues, the most obvious one being that the spectral data is still not used in many light paths, which causes those light contributions to be treated as ‘white’. This makes the effect much more pronounced than it should be. You should really only start seeing a tiny amount of illumination with a bright light source in this scenario.

Once I’ve fixed up the colour conversion issues I’ll do another comparison. Red/green likely has much more overlap so I’ll try that combination too.

I’m using spectral rec.709 primaries provided by @Scott_Burns in message 39. They’re an excellent choice for this application right now because every colour in the system is treated as rec.709 and those curves take advantage of that fact to be as smooth as possible.

I believe you can achieve the same outcome with slightly more saturated primaries, but once you go too far out you have to break one of the constraints: each curve bounded to the range 0–1, r + g + b = 1 across the spectrum, and smooth spectra (no sudden jumps or drops in intensity in any channel).
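To make that linear mixing concrete, here is a minimal sketch. The basis values below are hand-made toy numbers chosen only to satisfy the constraints above (bounded, summing to 1 per bin); they are not the actual Burns rec.709 curves, which are tabulated at much finer wavelength resolution.

```c
#include <assert.h>
#include <math.h>

#define N_BINS 5 /* coarse toy resolution; real basis tables are much finer */

/* Hypothetical basis spectra, blue end of the visible range first.
 * At every bin each channel stays within [0, 1] and the three channels
 * sum to exactly 1, mirroring the constraints discussed above. */
static const float S_r[N_BINS] = {0.02f, 0.05f, 0.10f, 0.80f, 0.95f};
static const float S_g[N_BINS] = {0.08f, 0.25f, 0.80f, 0.15f, 0.03f};
static const float S_b[N_BINS] = {0.90f, 0.70f, 0.10f, 0.05f, 0.02f};

/* Upsample a rec.709 triplet to a reflectance spectrum by linearly mixing
 * the three basis curves. Because the basis sums to 1 in every bin, an
 * r = g = b = 1 input yields a perfectly flat 100% reflector. */
static void rgb_to_spectrum(float r, float g, float b, float out[N_BINS])
{
  for (int i = 0; i < N_BINS; i++) {
    out[i] = r * S_r[i] + g * S_g[i] + b * S_b[i];
  }
}
```

Note that even a pure rec.709 red keeps a small non-zero value at the blue end of its basis curve, which is exactly why the saturated red light in the renders above can still contribute a little energy to the blue ball.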


It is affected, just spectral light isn’t being handled properly in that light path yet.

Ah, I see, and since the red spectrum derived this way gives a tiny amount of contribution at its blue end, even a perfectly blue (by rec.709 standards) sphere is gonna reflect a tiny amount of red (by rec.709 standards) light. Makes sense. Looks like the contribution really is quite tiny though.

@Scott_Burns I just saw you have since developed this technique further. If you were to do the same technique of linearly mixing three extremal RGB values with the newer hyperbolic tangent method, is the resulting mixture spectrum any better? Obviously they would ideally add up to a flat spectrum of perfect white, which the current set doesn’t do

EDIT: Never mind, they totally do. That was the whole point you made in that method. The question does remain though: If you use the hyperbolic tangent version rather than the logarithmic version, is the end result appreciably different?

Yes, it should be very low influence when using red and blue. Red and green, and green and blue will have significantly more overlap.

No. That’s an equal energy scenario, which is of a specific chromaticity. Note that “white” always means a specific chromaticity.

Why not just start with spectral? This fast-path logic is the exact same thing that has landed Blender in this mess of sRGB / BT.709 crap. Just skip all of the crap and do proper arbitrary spectral upsampling otherwise the awesome work is going to saddle the software with a glaring problem for gosh knows how long. Just use Meng et al. as a starting point given its simplicity.

Stop assuming sRGB / BT.709. Everywhere. Everyone.

Maybe I’m not understanding what you mean, but you need some convenient starting point, right? Most people aren’t gonna fiddle with spectral data. They just want something that works. And so you need to assume some sort of spectral conversion for materials. Of course there could easily be material libraries with real world spectra but you don’t always want to reproduce a material that actually really exists.

Also, this should be pretty easy to swap out once the actual spectral sampling works correctly. This is just a question of the user interface. How to convert arbitrary colors as specified by artists into spectra which Cycles would then use.

We are talking about a reflectance spectrum here. Intuitively, a material that reflects (perfectly) “white” should reflect 100% of the visible spectrum. So I think that assumption is a fair one to make.
What actually appears white is a different story and has, of course, to do with white points and shading and what not.

I don’t disagree. The problem is that developers are lost in most of this stuff, so you end up with what is convenient, like hacking on assumed sRGB into Grease Pencil, and completely cripple all of that coding work where the problem could have been avoided right out of the gate. Migrating to spectral should be lensed the same way; don’t underestimate how long crippled code will remain.

You realize Blender is littered with sRGB transfer functions and brand new code is hacked on to this day with sRGB / BT.709 assumptions? I’d agree with the idea, but sadly the reality is not the case. Worse, it becomes this monolithic gravity well that defies change thanks to other parts all trying to interact. It’s a huge problem, and the problem would not be easier to fix for spectral than it is now with RGB.

Wait a minute… can’t you specify an emission in Blender? Hrm…


Even so, gotta start somewhere. Once Cycles renders in spectra, the rest of this can be tackled one by one.

Well of course you can. And for that, this current approach really isn’t ideal. You’d definitely want to upgrade the already existing stuff - wavelength and blackbody emission - to properly use spectral rendering for one. Then you can, in principle, also provide a library for common light sources such as standard illuminants (the ones that aren’t just blackbody radiation, especially E), common LED spectra, fluorescent bulbs, neon tube spectra, an upgraded spectral based sky model, etc.
But for now, to see how this spectral stuff might work to begin with, just doing the, yes, hack of reusing the color conversion meant for reflectance spectra is a start.

My point is: Blender is a broken mess of pixel management now, and it was built exactly as you are suggesting. It doesn’t work unless it starts out working with an eye on how long the code will last. If Blender is any indication, the answer is “Well past the best before date.” :slight_smile:

This is the crux of the problem. If you use fixed spectrals as primaries, then they need to behave properly as emission primaries, not reflected. Technically the existing curves will always bounce back a certain amount of radiometric light, which I think makes them very much unsuitable as emission spectra.

It’s something to ponder, and why a dynamic upsampling is likely a better approach here, in addition to addressing the issues I outlined above regarding legacy code.

There is exactly one way to solve it all at once, and that’s to start from scratch. Short of what amounts to another dev sprint laser focused on this issue, if before “the best before date” is the hope, that’s just not in the cards. To even begin to make Blender reap all the benefits of spectral rendering, first it needs rendering engines with spectral rendering. This thread, as far as I can tell, is about that first step.

Once that works (which it doesn’t quite yet), the next steps can follow. For example:

  • Proper in-Cycles support for spectral materials. Upgrade existing nodes where the necessary changes are obvious and relatively easy (wavelength, blackbody, maybe sky texture - that one might be a bit more involved). Add a material library (I suspect that part wouldn’t be that hard. Just a list of what amounts to LUTs of spectral emission or reflectance data or complex IoR data or whatever. Would be easy to expand in the future too) - This much still seems in scope of the mission statement “Making Cycles into a spectral renderer”. It’s gonna be step 2 though.

  • Fancy new materials such as thin films or layered materials, as well as completely custom spectral data nodes. - These seem advanced and represent future directions.

  • Custom arbitrary or preset-based n-channel rendering output. - A basic limited form of this (rendering to some preset three channel color space of your choosing) is probably not that hard once spectral rendering as such works. Really, that’s quite close to what’s already happening, right? Just a tad more flexible. Totally arbitrary forms with extra channels are, if I understand right, more involved in particular due to GPU rendering implementation details. (There is, to my understanding, no such issue on the CPU side so long as memory doesn’t explode)

  • Proper color management throughout the entire pipeline with all the bells and whistles, including making correct use of the aforementioned possibly arbitrary n-channel data. - This is clearly your main wheelhouse and spectral rendering is gonna be pivotal for that but, while it certainly is important, solving that whole can of worms isn’t actually, as far as I can tell, the direct goal of this thread.

Each of these things is a piece of the puzzle and likely requires different devs with different sets of expertise.

Of course it’s important to keep in mind the full picture, but trying to actually do it all at once is just gonna be intimidating and make the required actions even less likely to happen anytime soon. Unless there is an active push from the dev team as a whole to go that way. Which, at this point, seems unlikely. They are preoccupied with getting Blender 2.8x low on bugs and adding features to other areas of Blender, both of which are totally fair and also important.

I don’t care about the blueprint for integration to be honest. Baby steps are totally fine.

Baby steps with godforsaken BT.709 based primaries isn’t. Full stop.

It’s garbage and exactly where we are now with Blender; hard coded rubbish everywhere and a bunch of developers randomly mashing their keyboards and cramming StackExchange numbers into the code base.

Nope. This is the very first thing required.

A DCC makes pixels. Make sure the pixels are the intended pixels. Blender is busted up because no one has realized that pixel management rests at the very foundational level. It’s literally the most important thing in a thing designed to output pixels.


@troy_s my goal is not only to add spectral rendering, but to improve the quality of the code in there.

My first step is to make the existing system work with spectral sampling without changing anything about it for the user. I will not merge hard coded rec.709 conversions into master, because as you say, that’s outdated and will just make it even harder to remove. Users should be able to use input data in any colour space or define things spectrally, but that’s not going to happen for a while yet.

I will make sure the conversion from some triplet in some colour space to spectral data is done in an abstract way, such that there can be various concrete implementations for different use cases. For emission, a parametric approach seems really nice, for example a Gaussian distribution around a given wavelength.

The reason mixing the three lights works well here is that it is really fast, and it has to be. My understanding of the Meng process is that it has to go through an optimisation process to determine a spectrum, which I can imagine is far too expensive to run every time a colour is needed in the engine. If that understanding is not correct, I’ll have to take another look at it.

Right now, using the rec.709 primaries and mixing between them enables me to get something out of the engine for testing. It certainly isn’t the end goal and I won’t be merging it hardcoded like this.


Meng works pretty fast, as it’s based on the solver speed.

With that said, I’m neither here nor there on the matter. I’m more concerned with the solve in question. Specifically, I’m a little concerned that the spectral primaries were solved against an array of ones. This means that while the procedure seems to work when you put a D65 light on the surface, I am now of the belief that it’s fundamentally wrong. Granted, I may be totally out to lunch here.

The reason I believe it’s wrong is that the primaries don’t appear to be usable as an emission. At least from what I can tell, if we were to output equal energies of the RGB spectra as shown, we’d end up with a flat spectrum, and that means it would be emitting Illuminant E.

Correct me if I’m wrong here. I like being wrong.


You are right in that an r=g=b emission would be an E illuminant in the current system, though this doesn’t have to be the case. My initial plan was actually to multiply whatever spectrum I got from the emission colour with D65 (or more generically, a standard illuminant for the white point of the current colour space) so that white materials appear white. There is the alternative of doing LMS scaling such that E illuminants become achromatic in the output, but I’m not so fond of this approach as it unnaturally weighs the emission spectrum.
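That multiplication step is simple to sketch. The illuminant table here would be normalized CIE D65 data in practice; no real data is embedded in this illustration.

```c
/* Sketch: reshape an upsampled emission spectrum by a standard illuminant,
 * so that an r = g = b emission outputs the colour space's white point
 * rather than Illuminant E. Both inputs are sampled on the same n
 * wavelength bins; the illuminant values are placeholders, not CIE data. */
static void apply_white_point(const float *spectrum, const float *illuminant,
                              float *out, int n)
{
  for (int i = 0; i < n; i++) {
    out[i] = spectrum[i] * illuminant[i];
  }
}
```

With this, a flat (r=g=b) emission spectrum comes out shaped exactly like the chosen illuminant, which is the "white materials appear white" behaviour described above.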

Ideally users would just know what they want and I let them do it, but unfortunately I feel like for 99% of cases this will be wrong.

We can relatively easily use different models for generating emission spectra vs. reflection ones, but I don’t know the answer when it comes to what is ‘right’. In my mind, and based on previous tests, I can see a system where emission spectra are E illuminant when r=g=b working, just for consistency’s sake. If people want daylight, they have to make it. Later down the track, it might make sense to expose a reference white option in emission spectra. I don’t have an answer :man_shrugging:

I think this is working, at least for reflection spectra. It is intuitive, fast and predictable. What is leading you to the thought that this is fundamentally wrong?
How you get that D65 light is the open question for me.
