Thoughts on making Cycles into a spectral renderer

I think the design of how people will light the scene is relevant.

My observations have come from just trying it out and seeing what works. From looking at the results, it seems most natural to light scenes with D65. That being said, that spectrum shouldn’t be assumed or enforced.

I would like to extend my thoughts and say that I agree with everything that @troy_s just said regarding colour. From my perspective of working with this system and seeing the results I’ve gotten (obviously one of many perspectives, and definitely one with less experience than the others here), two expectations stood out when using an environment texture: white in the image should appear white in the final result without any adaptation, and the spectra should be as close as possible to what exists in nature.

Until doing the Spectral>XYZ>RGB conversion, white points don’t come into the picture at all. Only once you’re in RGB does the concept of a white point make sense. I believe this is what Troy was referring to when he said that D65 would be handled at the tail end. I also take the reference to converting to E before converting to spectral to mean that the implicit RGB>spectrum conversion of 1,1,1 should behave like an E illuminant: a constant 1 across the whole spectrum. I agree with this.

With this understanding, what I wanted to explain is that while a reflective material with RGB 1,1,1 should be perfectly reflective across the spectrum, I feel the most natural spectrum to use for a white light source is D65. This will represent colours as they would be found in daylight more accurately than an E illuminant would. There’s no simple way to distinguish whether a colour is being used as a light source or as an absorption/reflection value (it is possible to use the same colour for both), so I thought the emission shader could be modified to include a “Base spectrum” dropdown, which would conveniently provide the user with many options depending on their needs. The colour could then be multiplied against that spectrum, so users can still use RGB environment maps while retaining a reasonably natural spectrum to light the scene with.
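To illustrate, here is a minimal sketch of what that multiplication could amount to, with a hypothetical `d65_power()` lookup and a placeholder `rgb_to_spectrum()` standing in for whichever upsampling method gets chosen (all names invented here, not Cycles API):

```cpp
// Sketch only; all names are hypothetical, not Cycles API.
// Helpers assumed to exist elsewhere:
float rgb_to_spectrum(float r, float g, float b, float wavelength);  // RGB upsampling
float d65_power(float wavelength);  // normalized CIE D65 spectral power

enum BaseSpectrum { BASE_E, BASE_D65 /* , ... other illuminants */ };

// Emission at one wavelength: the user's RGB colour, upsampled to a
// smooth [0,1] spectrum, multiplied by the selected base illuminant.
float emission_at_wavelength(float r, float g, float b,
                             BaseSpectrum base, float wavelength)
{
  float tint = rgb_to_spectrum(r, g, b, wavelength);
  float power = (base == BASE_D65) ? d65_power(wavelength) : 1.0f;  // E is flat
  return tint * power;
}
```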

When converting to RGB, any chromatic adaptation can take place as a post-processing stage. This is an issue unrelated to the light spectra, as pointed out by Troy.
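For concreteness, the standard von Kries-style adaptation such a stage would apply is, with $M$ a cone-response matrix such as Bradford, and $(\rho, \gamma, \beta)$ the responses of the source ($s$) and destination ($d$) white points under $M$:

$$
\begin{pmatrix} X' \\ Y' \\ Z' \end{pmatrix}
= M^{-1}\,\operatorname{diag}\!\left(\frac{\rho_d}{\rho_s},\, \frac{\gamma_d}{\gamma_s},\, \frac{\beta_d}{\beta_s}\right) M
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
$$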

It’s not difficult to distinguish, at the moment the conversion from RGB to spectral is needed, which case we are in: we have separate BSDF and emission closures.

Yes, that’s true. This is why I suggest the emission node (once/if this gets implemented) has a selectable property which allows the user to choose a base spectrum. This solves the issue of having to multiply the colour by a spectrum before putting it into an emission shader (which is the way I’m doing it with my current mockup/workflow).

This is more of a finer detail which isn’t important if the larger issues brought up earlier aren’t solved, namely whether or not MIS will still be usable if the user is allowed to drive parameters with the wavelength. I think more research needs to be done into this, but unfortunately I’m (1) not familiar with the Cycles code, (2) working on Windows, where I’ve heard building Blender is near impossible to set up, and (3) not familiar with how MIS works or how it is implemented in Cycles.

The shader nodes should output a spectrum, either converted from an RGB value or by using a BSDF or emission closures that can sample from a spectrum given a wavelength. There is no need for the specific wavelength(s) to be exposed to users in the shader nodes.

If we do this, just like other renderers, then MIS is not a problem.

It’s actually not:
https://wiki.blender.org/wiki/Building_Blender/Windows#Quick_Setup

I’m trying to understand the reasoning behind this statement. What do you mean by specific wavelengths? There’s no need to be able to blend between two BSDFs by a factor driven by wavelength, or to drive a parameter based on the wavelength, such as making something more transparent to specific wavelengths or driving IOR based on wavelength?

I’ll check it out, thanks.

(edit: I was told a new user can only post a single image, so I’ve provided links to what used to be images)

A truce it is! Thanks for addressing that. Sorry again for being so snippy above.

Your talk of Rec. 2020 got me looking into that. I hadn’t heard of it before (perhaps not surprising since I don’t even know what “Blender” or “Cycles” is, so I’m really a fish out of water here). But once I saw the colorimetric specs on it, it was easy to modify my optimization for it. So I’m finding the least slope spectral curve with magnitude 0 to 1 that has a given Rec. 2020 linear rgb value. That’s all the “human” input that is going in.
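Written out, my optimization is roughly (with $T$ the $3 \times N$ matrix that integrates a sampled spectrum $s$ against the CIE observer and converts the result to Rec. 2020 linear rgb):

$$
\min_{s \in \mathbb{R}^N} \sum_{i=1}^{N-1} (s_{i+1} - s_i)^2
\quad \text{subject to} \quad T s = (r, g, b)^\top, \quad 0 \le s_i \le 1.
$$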

For Rec. 2020 rgb = (1,1,1), I get a recovered spectrum of all 1’s, as expected:

For intermediate values, like rgb=(0.3,0.5,0.8), I get nice smooth curves:

Now this is where it gets pretty cool. Going for the extremely chromatic color rgb=(0,0.15,0), I get

Even though the optimizer is trying to find a smooth curve, the only option it has is to spike at the G primary!

I tried larger chromatic values like (0,1,0), but the optimizer says no dice. Not gonna happen. Still wrapping my head around that.

The way I look at entropy is how much “human” there is in the algorithm, or as you say, how much information is presupposed. But your suggestion to “incorporate some kind of peaking/diffusion parameter” is just that very type of presupposition! I’m going to try other types of objective functions, like simply minimizing the area under the reflectance curve, and see what happens.

I’m worried that all my talk of spectral recovery is getting off-topic in this thread. Any suggestions on other places where this discussion would be more on-topic?

I’m more than happy to continue this discussion over on my BlenderArtists thread! It isn’t entirely on topic over there either, but since it is my thread I assume it is okay to have it there.

All those things are to be handled by BSDFs, not shader nodes. The shader nodes produce BSDFs, and the BSDFs get the wavelength as input for evaluation and sampling.
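A minimal sketch of that division of labour (illustrative only, not actual Cycles code):

```cpp
// Illustrative only: the node graph bakes its parameters into a closure,
// and the wavelength only appears when the integrator evaluates/samples it.
struct GlassClosure {
  float ior_a, ior_b;  // Cauchy coefficients baked in by the shader nodes

  // The integrator supplies the wavelength; the closure derives any
  // wavelength-dependent quantities internally.
  float ior(float wavelength_nm) const
  {
    float w_um = wavelength_nm * 1e-3f;
    return ior_a + ior_b / (w_um * w_um);  // Cauchy dispersion n(λ) = A + B/λ²
  }
};
```

The node graph decides the coefficients; the integrator alone decides which wavelength to evaluate at.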

I think I understand the reasoning, but it is extremely limiting and puts a lot of burden on the devs for what seems to be a design choice. Maybe in the current system it conceptually makes sense for the wavelength to be completely hidden, but I think there are just too many cases/applications which require it. It is quite straightforward to use, actually, so if hiding it is simply a design decision and not something needed to avoid seriously breaking the optimisations, I urge that we expose it just in case.

I’ve built Blender and I’m having a look at the Cycles source. Where should I start getting most familiar in order to be able to start making changes? There’s obviously a lot to change, but also a lot more that won’t be affected at all. I’ve been looking through the kernel, but obviously I’m not going to understand the years of work that have been put into Cycles overnight.

It’s a standard design in physically based renderers, required to make importance sampling work well. Other renderers do the same thing, there is a good reason for it. We will not deviate from this.

Mainly you need to look in the kernel.

  • path_state_init could be where you sample the random wavelength to use for the path.
  • For conversion from RGB you need to find all places that sample or evaluate a closure and convert the RGB value to spectral: bsdf_eval, bsdf_sample, emissive_simple_eval, bssrdf_eval, volume_shader_sample, … (see the sketch after this list).
  • For converting back from spectral to RGB, look at kernel_write_result where it accumulates all render passes in the global buffer.
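To make the second bullet concrete, here is a rough sketch of the kind of shim that implies, assuming three hero wavelengths are carried in the path state; `rgb_to_spectrum` is a placeholder for whichever upsampling method gets chosen:

```cpp
// Illustrative shim, not existing Cycles code. Cycles has its own float3;
// it is redefined here only to keep the sketch self-contained.
struct float3 {
  float x, y, z;
};

// Placeholder for the chosen RGB-to-spectrum upsampling method.
float rgb_to_spectrum(float3 rgb, float wavelength);

// Wherever a closure currently produces an RGB weight, evaluate the
// upsampled spectrum at each of the path's hero wavelengths instead,
// so the rest of the integrator can keep operating on a float3.
float3 closure_weight_to_spectral(float3 rgb_weight, float3 wavelengths)
{
  return float3{rgb_to_spectrum(rgb_weight, wavelengths.x),
                rgb_to_spectrum(rgb_weight, wavelengths.y),
                rgb_to_spectrum(rgb_weight, wavelengths.z)};
}
```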

Understood. It’s unfortunate if that’s how it has to be, but if that’s a hard requirement for MIS to work, I guess that’s what has to happen. I’m not familiar enough with MIS to know how exposing wavelength would break it, but I guess I will find out why when I try.

I’ll have a look in those areas. I doubt I’ll get anything workable yet but I’ll see what I can do. Thanks for that.

This is perfect! That makes a concentration parameter entirely unnecessary.

Here’s a question though: could you perhaps optimize these things independently of RGB, on the full XYZ space? XYZ → Spectral would be the first goal, and from there it’d just be a matter of applying the usual XYZ → RGB to get the right result; everything would go RGB → XYZ → Spectral as needed for whatever color space you might want.

Not sure if the horse-shoe outline poses any problem for that kind of idea. Clearly not all XYZ values would even be physically meaningful. But basically, you’d have to somehow work on this space

That’s right. I believe “the right way” to do this, in principle, would be to optimize a color over all possible spectral metamers (under some reference lighting, perhaps Illuminant E) that produce it, finding the one with maximum entropy.
Though that is perhaps easier said than done. I found some literature on this, but as far as I could tell, it always seems to involve training on some data set of actual physical spectra, rather than theoretically considering all spectra that might produce a given color. And there might be a good reason for that.
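Spelled out, the problem I have in mind is roughly:

$$
\max_{0 \le s(\lambda) \le 1} H(s)
\quad \text{subject to} \quad
\int s(\lambda)\, E(\lambda)\, \bar{x}(\lambda)\, \mathrm{d}\lambda = X,
$$

and likewise for $Y$ and $Z$ with $\bar{y}(\lambda)$ and $\bar{z}(\lambda)$, for some suitable entropy functional $H$.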


My only concern with the approach used to create those spectra is this:
The spectrum is generated by running the optimization for a particular Rec. 2020 colour, rather than being a weighted sum of three spectra precomputed for the Rec. 2020 primaries (I believe). This is unlikely to be suitable for rendering, because this calculation happens millions to billions of times during a render. The nice thing about summing the spectra generated for the primaries is that it is very fast; see the sketch below. If the approach used to get these spectra is fast enough, it is perfect, but I don’t know enough about this to be able to determine that.
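To make the speed argument concrete: with precomputed basis spectra for the three primaries, each per-sample evaluation reduces to a weighted sum (a sketch; the tables and names are hypothetical):

```cpp
// Sketch: per-wavelength evaluation as a weighted sum of three
// precomputed primary spectra. Tables and names are hypothetical.
const int N_BINS = 81;                   // e.g. 380..780 nm at 5 nm steps
extern const float SPECTRUM_R[N_BINS];   // precomputed once for the
extern const float SPECTRUM_G[N_BINS];   // Rec. 2020 primaries
extern const float SPECTRUM_B[N_BINS];

inline float rgb_to_spectrum_fast(float r, float g, float b, int bin)
{
  // Three multiplies and two adds per wavelength sample, with no
  // optimization at render time: this is why summing primary spectra is fast.
  return r * SPECTRUM_R[bin] + g * SPECTRUM_G[bin] + b * SPECTRUM_B[bin];
}
```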

The ultimate goal would be XYZ > spectrum conversion, and there are things that can be done on the Blender side to avoid producing invalid XYZ colours, or even colours too close to the boundary, since these are likely to be problematic.


It looks like what can be done is to provide a somewhat sparse LUT which is then interpolated between. At least that’s what appears to be done here, for the purpose of rendering; that page includes some useful supplementary material as well.
The relevant graphic is this.
The method is still only about half as fast as a more naive method, but given the vastly superior results, I think it is worth that overhead.
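As I understand it, the render-time cost then is just a table fetch plus interpolation, something like this rough sketch (assuming a coarse RGB-indexed table; everything here is invented for illustration):

```cpp
#include <algorithm>

// Rough sketch of LUT-based spectral upsampling: store spectra (or
// coefficients of a spectrum model) on a coarse RGB grid and
// trilinearly interpolate. The table and names are hypothetical.
const int GRID = 16;
extern const float LUT[GRID][GRID][GRID];  // entry for one wavelength bin

float lut_lookup(float r, float g, float b)
{
  // Map a [0,1] colour component to a grid cell and a fractional offset.
  auto axis = [](float v, int &i0, int &i1, float &f) {
    float x = std::min(std::max(v, 0.0f), 1.0f) * (GRID - 1);
    i0 = (int)x;
    i1 = std::min(i0 + 1, GRID - 1);
    f = x - i0;
  };
  int r0, r1, g0, g1, b0, b1;
  float fr, fg, fb;
  axis(r, r0, r1, fr);
  axis(g, g0, g1, fg);
  axis(b, b0, b1, fb);

  auto lerp = [](float a, float b, float t) { return a + t * (b - a); };
  // Interpolate along b, then g, then r.
  float c00 = lerp(LUT[r0][g0][b0], LUT[r0][g0][b1], fb);
  float c01 = lerp(LUT[r0][g1][b0], LUT[r0][g1][b1], fb);
  float c10 = lerp(LUT[r1][g0][b0], LUT[r1][g0][b1], fb);
  float c11 = lerp(LUT[r1][g1][b0], LUT[r1][g1][b1], fb);
  return lerp(lerp(c00, c01, fg), lerp(c10, c11, fg), fr);
}
```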

That is nice indeed. There would be a tradeoff between the resolution of the grid and the memory usage, but this seems like a really nice direction to extend into once it is working.


Well, I have the spectral sampling working; now to update all the shaders to use the new wavelengths property of PathState (a float3; still not sure if this is the best way to go about it, but it seems okay for now). I had a quick look around but couldn’t figure out 1) what files need modifying, and in what ways, and 2) how to actually reference PathState from within the BSDF definition.

I’m also just using the regular rand() function to select wavelengths, because I couldn’t figure out how to use the Sobol generator.
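For reference, what I have amounts to something like this (one uniformly sampled hero wavelength plus two rotated copies, in the style of hero wavelength spectral sampling; rand01() stands in for the proper sampler, and the visible range is just a common choice):

```cpp
#include <cstdlib>

// Sketch: pick one random "hero" wavelength, then derive the other two by
// rotating it through the visible range (hero wavelength spectral sampling).
// rand01() stands in for the Sobol-based RNG I haven't hooked up yet.
const float LAMBDA_MIN = 380.0f, LAMBDA_MAX = 730.0f;
const float LAMBDA_RANGE = LAMBDA_MAX - LAMBDA_MIN;

float rand01() { return (float)std::rand() / (float)RAND_MAX; }

void sample_wavelengths(float wavelengths[3])
{
  float hero = LAMBDA_MIN + rand01() * LAMBDA_RANGE;
  for (int i = 0; i < 3; i++) {
    // Offset by i/3 of the range, wrapping back into [LAMBDA_MIN, LAMBDA_MAX).
    float w = hero + (i / 3.0f) * LAMBDA_RANGE;
    wavelengths[i] = (w >= LAMBDA_MAX) ? w - LAMBDA_RANGE : w;
  }
}
```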

@brecht any tips for updating shaders?


What do you mean by updating shaders exactly?

The first step I think is to make sure that everywhere we get RGB colors from closures (BSDF, BSSRDF, volume, emission), we convert them to spectral. For this, bsdf_eval, bsdf_sample, emissive_simple_eval, bssrdf_eval and volume_shader_sample are where you need to look.

Sorry, my terminology was off. Thanks for that, I’ll have a look into it.

Looking at it with an untrained eye, it is pretty cryptic… I can see that bsdf_eval returns a float3, which I’m guessing is just RGB floats. It seems like I’m going to have to dive into each of the functions in the switch statement in order to get it to behave correctly?