Thoughts on making Cycles into a spectral renderer

All those things are to be handled by BSDFs, not shader nodes. The shader nodes produce BSDFs, and the BSDFs get the wavelength as input for evaluation and sampling.

I think I understand the reasoning, but it seems extremely limiting and puts a lot of burden on the devs for what appears to be a design choice. Maybe in the current system it conceptually makes sense for wavelength to be completely hidden, but I think there are just too many cases and applications which require it. It is actually quite straightforward to use, so if this is simply a design decision and not something that would seriously break the optimisations, I’d urge that we expose it, just in case.

I’ve built Blender and I’m having a look at the Cycles source. Where should I start getting familiar in order to be able to make changes? There’s obviously a lot to change, but also a lot more that won’t be affected at all. I’ve been looking through the kernel, but obviously I’m not going to understand the years of work that have gone into Cycles overnight.

It’s a standard design in physically based renderers, required to make importance sampling work well. Other renderers do the same thing, there is a good reason for it. We will not deviate from this.

Mainly you need to look in the kernel.

  • path_state_init could be where you sample the random wavelength to use for the path (see the sketch after this list).
  • For conversion from RGB you need to find all places that sample or evaluate a closure and convert the RGB value to spectral. bsdf_eval, bsdf_sample, emissive_simple_eval, bssrdf_eval, volume_shader_sample, …
  • For converting back from spectral to RGB, look at kernel_write_result where it accumulates all render passes in the global buffer.
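For the first point, here is a rough sketch of what hero-wavelength sampling in path_state_init might look like. Nothing in it exists in the current code: the `wavelengths` field on PathState, the 380–730nm range and the three-wavelength rotation scheme are all assumptions.

```cpp
/* Hypothetical sketch, Cycles kernel style: sample one "hero" wavelength
 * for the path plus two companions rotated by a third of the visible
 * range, so three wavelengths can be carried where RGB used to be.
 * state->wavelengths does not exist yet; it would be a new float3 field. */
ccl_device_inline void path_state_init_wavelengths(PathState *state, float rand)
{
  const float lambda_min = 380.0f;
  const float range = 730.0f - lambda_min;

  const float hero = rand * range; /* offset from lambda_min */
  state->wavelengths.x = lambda_min + hero;
  state->wavelengths.y = lambda_min + fmodf(hero + range / 3.0f, range);
  state->wavelengths.z = lambda_min + fmodf(hero + 2.0f * range / 3.0f, range);
}
```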

Understood. It’s unfortunate if that’s how it has to be, but if it’s a hard requirement for multiple importance sampling (MIS) to work, I guess that’s what has to happen. I’m not familiar enough with MIS to know how exposing wavelength would break it, but I expect I’ll find out when I try.

I’ll have a look in those areas. I doubt I’ll get anything workable yet but I’ll see what I can do. Thanks for that.

This is perfect! That makes a concentration parameter entirely unnecessary.

Here’s a question, though: could you perhaps optimize these things independently of RGB, over the full XYZ space? XYZ -> Spectral would be the first goal; from there it’s just a matter of applying the usual XYZ -> RGB transform to get the right result, and everything would go RGB -> XYZ -> Spectral as needed for whatever color space you might want.

Not sure if the horseshoe outline poses any problem for that kind of idea. Clearly not all XYZ values are even physically meaningful, but basically you’d have to somehow work in this space.
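To make the first hop concrete, here is a minimal standalone sketch, assuming linear Rec.709 input with a D65 white point (for Rec.2020 the matrix entries differ): RGB -> XYZ is a single 3×3 matrix multiply, so chaining RGB -> XYZ -> Spectral adds almost nothing on top of the spectral lookup itself.

```cpp
/* Standalone sketch: linear Rec.709 RGB to CIE XYZ (D65 white point).
 * The matrix values are the standard ones; only the struct name is made up. */
struct Vec3 { float x, y, z; };

Vec3 rgb_to_xyz(Vec3 rgb)
{
  return {0.4124f * rgb.x + 0.3576f * rgb.y + 0.1805f * rgb.z,
          0.2126f * rgb.x + 0.7152f * rgb.y + 0.0722f * rgb.z,
          0.0193f * rgb.x + 0.1192f * rgb.y + 0.9505f * rgb.z};
}
```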

That’s right. I believe “the right way” to do this, in principle, would be to optimize a color over all possible spectral metamers that produce it (under some reference illuminant, perhaps Illuminant E), finding the one with maximum entropy.
Though that is perhaps easier said than done. I found some literature on this, but as far as I could tell it always seems to involve training on a data set of actual physical spectra, rather than theoretically considering all spectra that might produce a given color. There might be a good reason for that.
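For what it’s worth, one way to state that optimization (my phrasing, assuming Illuminant E so the illuminant term drops out, with the CIE colour matching functions as the constraints):

```latex
\max_{S}\; -\int_{380}^{730} S(\lambda)\,\ln S(\lambda)\,\mathrm{d}\lambda
\quad\text{subject to}\quad
\int S(\lambda)\,\bar{x}(\lambda)\,\mathrm{d}\lambda = X,\;
\int S(\lambda)\,\bar{y}(\lambda)\,\mathrm{d}\lambda = Y,\;
\int S(\lambda)\,\bar{z}(\lambda)\,\mathrm{d}\lambda = Z,\;
0 \le S(\lambda) \le 1
```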

My only concern with the approach used to create those spectra is this:
The spectrum is generated from a particular Rec.2020 colour directly, rather than as a weighted sum of three spectra generated for the Rec.2020 primaries (I believe). This is unlikely to be suitable for rendering, because this calculation happens millions to billions of times during a render. The nice thing about summing the spectra generated for the primaries is that it is very fast (a sketch follows below). If the approach used to get these spectra is fast enough, it is perfect, but I don’t know enough about this to be able to determine that.
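To illustrate why the sum-of-primaries variant is so cheap, a standalone sketch (the basis-spectrum lookups are hypothetical precomputed tables, not anything that exists in Cycles):

```cpp
/* Hypothetical per-primary basis spectra; in practice these would be
 * precomputed tables or fitted curves. */
float r_spectrum(float lambda);
float g_spectrum(float lambda);
float b_spectrum(float lambda);

/* Fast upsampling: reflectance at lambda is the RGB-weighted sum of the
 * three basis spectra -- three lookups and three multiplies per call,
 * which is why it scales to billions of evaluations per render. */
float rgb_to_spectrum(float r, float g, float b, float lambda)
{
  return r * r_spectrum(lambda) + g * g_spectrum(lambda) + b * b_spectrum(lambda);
}
```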

The ultimate goal would be XYZ -> Spectral conversion, and there are things that can be done on the Blender side to avoid getting invalid XYZ colours, or even colours too close to the gamut border, since these are likely to be problematic.

It looks like what can be done is to provide a somewhat sparse LUT which is then interpolated. At least, that appears to be what is done here for rendering purposes - that page includes some useful supplementary material as well.
The relevant graphic is this.
The method is still only about half as fast as a more naive one, but given the vastly superior results, I think it is worth the overhead.

That is nice indeed. There would be a tradeoff between the resolution of the grid and the memory usage, but this seems like a really nice extension to make once the basics are working.
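If the LUT stores a few polynomial coefficients per colour, the per-sample reconstruction could stay nearly as cheap as the sum-of-primaries method. A sketch of what I imagine that looks like (the quadratic-plus-sigmoid form and the fetch_coefficients helper are my assumptions from skimming, not verified against the paper):

```cpp
#include <cmath>

/* Coefficients fetched from a sparse 3D LUT with trilinear
 * interpolation (the lookup itself is omitted here). */
struct Coeffs { float c0, c1, c2; };
Coeffs fetch_coefficients(float r, float g, float b);

/* Reconstruct reflectance at one wavelength: a quadratic in lambda
 * squashed through a smooth sigmoid so the result stays in [0, 1]. */
float eval_spectrum(Coeffs c, float lambda)
{
  const float x = (c.c0 * lambda + c.c1) * lambda + c.c2;
  return 0.5f + x / (2.0f * std::sqrt(1.0f + x * x));
}
```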


Well, I have the spectral sampling working; now to update all the shaders to use the new wavelengths property of PathState (a float3 - still not sure if this is the best way to go about it, but it seems okay for now). I had a quick look around but couldn’t figure out 1) what files need modifying in what ways, and 2) how to actually reference PathState from within the BSDF definition.

I’m also just using the regular rand() function to select wavelengths, because I couldn’t figure out how to use the Sobol generator.
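I suspect the right thing is something like the fragment below, using path_state_rng_1D from kernel_random.h with a new dedicated dimension, but I haven’t got it working yet (PRNG_WAVELENGTH is made up; it would need adding to the PRNG_* enum):

```cpp
/* Kernel-style fragment, not standalone: draw a stratified (Sobol)
 * sample from a dedicated dimension instead of rand(), then map it
 * to the visible range. PRNG_WAVELENGTH is hypothetical. */
const float u = path_state_rng_1D(kg, state, PRNG_WAVELENGTH);
const float lambda = 380.0f + u * (730.0f - 380.0f);
```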

@brecht any tips for updating shaders?


What do you mean by updating shaders exactly?

The first step I think is to make sure that everywhere we get RGB colors from closures (BSDF, BSSRDF, volume, emission), we convert them to spectral. For this, bsdf_eval, bsdf_sample, emissive_simple_eval, bssrdf_eval and volume_shader_sample are where you need to look.

Sorry, my terminology was off. Thanks for that, I’ll have a look into it.

Looking at it with an untrained eye, it is pretty cryptic… I can see that bsdf_eval returns a float3, which I’m guessing is just the RGB floats. It seems like I’m going to have to dive into each of the functions in the switch statement to get it to behave correctly?

I think all of the domain-specific terminology is throwing me off a bit here, and I don’t yet see how to access the PathState from within bsdf_eval, since it isn’t an argument.

You need to pass the PathState or the wavelength info as an argument to the functions that need it, all the way down to bsdf_eval and similar.

Most BSDFs do not need to be aware of wavelengths, just a few would need it I think. For most it’s a matter of converting the returned RGB value.
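Roughly, the plumbing would look like this (hypothetical sketch: the wavelengths parameter and the rgb_to_spectrum helper don’t exist, and the signature here is approximate):

```cpp
/* Sketch: thread the sampled wavelengths through to bsdf_eval and
 * convert the RGB result once on the way out, so the individual BSDFs
 * stay wavelength-unaware. */
ccl_device float3 bsdf_eval(KernelGlobals *kg,
                            ShaderData *sd,
                            const ShaderClosure *sc,
                            const float3 omega_in,
                            const float3 wavelengths, /* new: from PathState */
                            float *pdf)
{
  float3 eval = make_float3(0.0f, 0.0f, 0.0f);
  /* ... existing per-closure switch fills `eval` with an RGB value ... */
  return make_float3(rgb_to_spectrum(eval.x, eval.y, eval.z, wavelengths.x),
                     rgb_to_spectrum(eval.x, eval.y, eval.z, wavelengths.y),
                     rgb_to_spectrum(eval.x, eval.y, eval.z, wavelengths.z));
}
```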

Thanks for that.

I’m having trouble understanding this. Is the returned RGB value of a closure just the value of the colour input on the node, or the resulting colour after simulating that node? Sorry, I’m not quite sure how to phrase what I’m thinking.

The wavelengths, which have hijacked the place of RGB on the sampling side of things, would have to be taken into account in any calculation that involves a colour, I think…

Apologies for all the questions, I’m sure you have other things to be doing.

A closure has an associated RGB weight. Additionally the BSDF can also return a color from evaluation or sampling (though most of them return a grayscale value). These are multiplied together to get the final resulting RGB color from evaluating or sampling the closure.

Often the result is the color input of the node multiplied by some scalar, but not always.
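Schematically (variable names illustrative, signature approximate):

```cpp
/* How the two pieces combine, as in _shader_bsdf_multi_eval: the
 * closure's stored weight times whatever evaluation returns. */
float3 eval = bsdf_eval(kg, sd, sc, omega_in, &pdf); /* often grayscale */
float3 result = sc->weight * eval;                   /* final RGB */
```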

Okay, that’s what I needed to know. If that’s the case, each BSDF will need to know the wavelengths of the three channels, so that it can determine its reflectivity for each one via the spectral lookup from its associated RGB value.

What is that ‘associated RGB weight’ represented by in the diffuse BSDF? I still can’t quite figure that bit out.

It is sc->weight, as used in _shader_bsdf_multi_eval_branched and _shader_bsdf_multi_eval.
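So for the spectral version, the conversion could happen on that weight, along these lines (state->wavelengths and rgb_to_spectrum being the hypothetical pieces discussed above):

```cpp
/* Kernel-style fragment: evaluate the closure's RGB weight at each of
 * the path's three wavelengths instead of using it as RGB directly. */
const float3 w = sc->weight;
float3 spectral_weight =
    make_float3(rgb_to_spectrum(w.x, w.y, w.z, state->wavelengths.x),
                rgb_to_spectrum(w.x, w.y, w.z, state->wavelengths.y),
                rgb_to_spectrum(w.x, w.y, w.z, state->wavelengths.z));
```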

I’m making no progress with modifying the BSDFs, unfortunately.

I’m not sure if I’ve misunderstood what sc->weight means, since it doesn’t seem to represent the RGB triplet from the BSDF node. All I need to do is replace the RGB coming into the BSDF nodes with three new values based on the wavelengths being calculated, but I can’t see where I would do that other than in the shader OSL, such as kernel/shaders/node_diffuse_bsdf.osl, in which I don’t seem to be able to access the wavelengths.

In fact, bsdf_diffuse_sample doesn’t seem to use sc->weight at all.

Except you almost never want this, as you are nearly always dealing with a reference additive RGB gamut. Further, XYZ and RGB are virtually synonymous the moment you define the colour space, so essentially any decent upsampling approach is already dealing with XYZ.

Remember, the goal here is an upsampled spectrum, which will always come from an input colour space. It is quite important to keep it based on the primaries of that space, or else one risks wonky gamut hulls for particular inputs.

The shape of the spectrum is also up for debate, as I believe the need for 100% reflectance is a naive one[1]. A parametric approach is important, as one is never aware of the spectral composition goal, and as such there isn’t always a single optimal composition.

[1] I believe it is too easy to conflate a reflected value with an emissive need. Basing the upsample on the primaries means it is based on emissive properties, and I believe the important facet is the one that matters for any well-designed RGB space: a sum of Y to 1.0. From there, subtractive values should be able to be generated with little concern for peak reflectivity, given that no physically plausible materials have albedos of 1.0, etc. Of course we can make such values, but I believe that particular facet is a confusion of protocol.