Thoughts on making Cycles into a spectral renderer

In the viewport I don’t get it, apparently, but I also got another memory error:

But rendering the final result gives me this

Hello everybody,
I stumbled across this forum thread while looking for ways to simulate the appearance of dyed plastic foils. Reading through the thread, I find it amazing to see the development of this renderer!

I’d like to consult you about using spectral Blender to simulate laminated, dyed plastic films (dyes evenly distributed). Imagine a stack of two layers (one with a red dye and one with a blue dye) for which both the transmittance spectra and the film thicknesses are known.

  • How does spectral Blender relate an imported spectrum curve (I succeeded in importing one and assigning it to volume absorption nodes) to the thickness of the assigned volume (layer thickness)?
    Do I need to read in the spectra of 10-micrometer-thick films and build them at the same thickness in spectral Blender? Does changing the thickness of a film in spectral Blender change the absorption of the modified layer according to the Lambert-Beer law?

Thanks a lot for your help!


Hi @chrw, thanks for the comment.

This is a perfect application of spectral rendering - great to see people working on projects like this.

There are two approaches here:

  1. Set up a volume absorption spectrum according to your dye and measure the transmittance through a solid object
  2. Create a plane which contains a transparent BSDF which you assign a calculated transmittance spectrum to, accounting for the depth in the calculation of the total transmittance.

As you’ve pointed out, volume absorption depends on the thickness of the volume; I believe it transmits the specified colour after 1 BU (Blender unit) of travel through the volume when the density is 1.
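If that convention holds (density 1 transmits the specified colour after 1 BU), the transmittance generalises as colour raised to the power density × distance, which is just a restatement of Beer-Lambert. A toy sketch under that assumption (the function name is mine, not Blender API):

```python
def absorption_transmittance(color, density, distance_bu):
    """Transmittance through a homogeneous absorbing volume.

    Assumes the convention that a Volume Absorption node with
    density 1 transmits exactly `color` after 1 Blender unit (BU):
    sigma(lambda) = -ln(color) * density, so
    T = exp(-sigma * d) = color ** (density * d).
    """
    return [c ** (density * distance_bu) for c in color]

# After 1 BU at density 1, the specified colour comes straight through:
print(absorption_transmittance([0.8, 0.2, 0.1], density=1.0, distance_bu=1.0))
# Doubling the path length squares each channel's transmittance:
print(absorption_transmittance([0.8, 0.2, 0.1], density=1.0, distance_bu=2.0))
```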

If you aren’t looking at spatially varying effects, such as different transmittance due to ray length at grazing angles, I think the second approach will be simpler and render more quickly. You simply raise your transmittance spectrum (which is calculated for some reference depth) to a power equal to the ratio of the desired depth to the reference depth. For example, if your transmittance spectra are specified for transmittance through 10 micrometers of the material, and you want to see how it would look through 50 micrometers of the same material, you raise the transmittance spectrum to the 5th power.
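A tiny sketch of that power rule (the function name and the sample numbers are just illustrative):

```python
def scale_transmittance(t_ref, d_ref_um, d_target_um):
    """Rescale a transmittance spectrum measured at thickness d_ref_um
    to a new thickness, via Beer-Lambert: T_new = T_ref ** (d / d_ref).
    t_ref: per-wavelength transmittance values in [0, 1].
    """
    exponent = d_target_um / d_ref_um
    return [t ** exponent for t in t_ref]

# Spectrum measured through a 10 micrometer film, viewed through 50:
t_10um = [0.90, 0.60, 0.30]                      # toy per-wavelength values
t_50um = scale_transmittance(t_10um, 10.0, 50.0)  # each value raised to the 5th power
```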

As for how to get a tabulated spectrum into the material editor, the tedious way is to create a spectrum curve node and manually specify the points, but I believe @pembem22 may have previously built a script to load files of a particular format as a spectrum curve node. We’ll need to wait for him to clarify that.

I hope that helps, and please post your results here, I’m curious to see what you find!


Thanks a lot for the input, I’m going to try things out later today. It will take me some time, as I only discovered Blender a few days ago.

I have successfully used the plugin prepared by @pembem22 to read *.csv spectra. Could an input field for the film thickness, in some metric unit, be integrated into it? Maybe the thickness could also be specified in the first line of the *.csv file.

Is there a way to use your plugin via the script window (load *.csv spectra and assign them to nodes via Python)?

Hi, I modified the script to support usage as an operator. Change the extension of the file to .py and replace the old one in the 2.93\scripts\addons directory.
import_spectrum_csv.txt (3.5 KB)

Here’s an example of usage:
bpy.ops.node.import_spectrum_csv({}, 'EXEC_DEFAULT', False, filepath="C:\\path\\to\\file.csv", ignore_top_rows=2, spectrum_column=3)

It’ll add a new node to the active material of the active object. You can change both programmatically. The node will appear as the last one in the list of nodes of that material:

As far as I understand, the first three arguments are required for every call of any operator, but I’m not sure what exactly they mean.

You can also see all available parameters by pressing TAB when you start entering arguments:

You can write a custom script to import additional information from *.csv files and add the “Value” nodes with corresponding values to the node tree, if that’s what you mean.
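For reference, the parsing side of such a custom script could look something like this stdlib-only sketch. The column layout, header handling, and the `read_spectrum_csv` name are my assumptions, not the add-on’s actual code; the parameter names mirror the operator example above:

```python
import csv
import io

def read_spectrum_csv(text, ignore_top_rows=0, wavelength_column=1, spectrum_column=2):
    """Parse (wavelength, value) pairs from CSV text.

    Mirrors the ignore_top_rows / spectrum_column parameters of the
    import operator; columns are 1-based to match that example.
    The actual file layout is an assumption - adjust to your files.
    """
    rows = list(csv.reader(io.StringIO(text)))[ignore_top_rows:]
    return [(float(r[wavelength_column - 1]), float(r[spectrum_column - 1]))
            for r in rows if r]

# Two header rows, then wavelength/transmittance pairs:
sample = "instrument,foo\nnm,T\n400,0.91\n410,0.88\n"
print(read_spectrum_csv(sample, ignore_top_rows=2))
```

The extra thickness value mentioned above could live in one of the skipped header rows and be read the same way before handing the pairs to a spectrum curve node.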

Just saw some news in the render meeting notes, and I think it’s worth mentioning here.

It seems Cycles X will be merged to master around September 20, and it is also mentioned that this is probably when “the 3.0 like merge window is likely to close.” I don’t know how Spectral Cycles X is doing, but if we are still not ready, I guess we can wait for 3.1 or something (I wasn’t sure about the version naming in the 3.x series, but I just checked the previous blog post: the next version is 3.1).


Any news here? This has come so far, I hope it hasn’t just died now…


Hi @kram1032, definitely not dead. There haven’t been many updates since we have all been busy with other things (for myself, my day job has demanded a lot more of me lately), but the hope is to get a spectral ‘core’ merged after we iron out some issues with colours not matching where they should be, and after we ensure feature parity with regular Cycles X. A new spectral upsampling method which @pembem22 has worked on recently looks promising but has posed some challenges.

I’ll spend some time with the existing sRGB primaries upsampling method to see where the issues are - if we can solve them, I think it would be worth merging with that method, then looking to replace it with the more advanced method later.


Hi, @smilebags is right, it’s not dead. I’ve been working on a new spectral reconstruction method based on the implementation from the colour library. It’s not perfect yet though as there are artifacts in certain cases. Also, there have been lots of improvements to Cycles lately, but they cause merge conflicts that take some time to resolve and may accidentally introduce new bugs.

So the main objective right now is to fix bugs and make spectral rendering compatible with all Cycles features.


A question: while digging through past posts on this thread, I came across @troy_s talking about a spectral version of Filmic with proper gamut mapping to solve the out-of-gamut blue turning purple problem. And he once said this

This was a post from last year. I vaguely remember that Filmic was not present in the Spectral branch at the time (I hope my memory is not wrong), so I am somewhat confused about its current status. Is Spectral Filmic with proper gamut mapping still WIP, or is the Filmic currently in Spectral Cycles X already the Spectral Filmic? If it’s WIP, is there any hope it might make it in alongside the initial “core merge” that smilebags mentioned? If the current Filmic is already it, why do I still see the blue-to-purple effect? Does it have to do with the spectral reconstruction as well?

This is quite a complex topic. Simply put, the Filmic in the spectral branch is still the regular Filmic, and because we are feeding it input it wasn’t designed for (wide gamut), it will show some issues that are usually nonexistent in regular Filmic. It is still significantly better to use Filmic than not.


The issue is that as indirect bounces accumulate, instead of moving “out” toward the primaries, the mixtures move “out” toward the spectral sample point locus. This means that the mixtures become problematic immediately, and the compression has to cover the entire locus.

This is a very challenging and as yet unsolved problem. Spectral path tracing has made gamut mapping the footprint, which amounts to the entire spectral locus, a larger dilemma. And gamut mapping the “up and down” volume is challenging enough!


This post might seem a bit abrupt but I just said this in another thread

and I wanted to include a set of images but second thought this is not really related to that thread so I am posting them here.
This is what we currently get from the Spectral branch with blue light on blue cube:

And here is what TCAMv2 does:

Wow, it just looks amazing. It still looks a bit purple but it is so subtle now. Just sadly it’s not open source.


Just looked up this thread… I also had a bit of a discussion with @troy_s a little earlier and got a bit more confused by the whole colour thing :sweat_smile: . It’s nice to see the progress and everything; I’ll have to test it myself. It’s not quite obvious what the current plan is, so I’ll take it as “not working with all Cycles X features yet, but a lot is already working”.

Why does Filmic get involved as well? It’s supposed to be a post-process colour mapping; how is it related to spectral rendering?

Hi @ChengduLittleA, great to hear you’re interested!

Filmic is relevant here because it is very easy to create colours outside of sRGB in spectral rendering engines. We need a way to take those ‘wide-gamut’ colours, and bring them into the destination colour space (usually sRGB) without impacting the look of the image too much. Standard Filmic was not designed to handle colours outside of sRGB, so putting such colours through it results in less-than-ideal results.

The results of spectral Cycles can already be exported as EXR and hand-processed, but we need a better default for displaying the results.
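To illustrate why a better default is needed, here is a toy comparison of two crude ways to force an out-of-gamut RGB triplet into range. Neither is what Filmic or TCAM actually does; both functions are invented for illustration:

```python
def clip_to_gamut(rgb):
    """Naive per-channel clip: clamps negatives, distorting hue and saturation."""
    return [min(max(c, 0.0), 1.0) for c in rgb]

def desaturate_to_gamut(rgb):
    """Mix toward a crude achromatic value just enough to bring every
    channel to >= 0. Holds hue better than clipping, at the cost of
    chroma. Toy approach only; assumes the achromatic value is positive."""
    grey = sum(rgb) / 3.0
    low = min(rgb)
    if low >= 0.0:
        return list(rgb)
    t = grey / (grey - low)  # smallest mix factor that zeroes the minimum channel
    return [grey + t * (c - grey) for c in rgb]

wide = [1.2, -0.3, 0.1]            # out-of-gamut tristimulus (negative green)
print(clip_to_gamut(wide))         # green information simply discarded
print(desaturate_to_gamut(wide))   # chroma reduced instead
```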


So there are two kinds of rendering involved. The first is the rendering done by Cycles or Eevee etc.: from 3D scene data to Open Domain/scene-referred light data. In RGB renderers the light data is tristimulus data, while in spectral rendering the light data is actual light data, with wavelengths and so on.

Note that no monitor can directly display the result of the first kind of rendering. In RGB rendering in the Rec. 709/sRGB gamut, the problem is only about intensity: as I said in another thread, you can give the Nishita sky texture the real world’s sun strength, but your monitor can never emit the same power as the sun.

The second kind of rendering is the “View Transforms” like Filmic, which take Open Domain data and render them into an image your monitor can display.

However, with spectral rendering there is a second problem besides intensity: gamut. A quote from Troy in this thread not long ago:

Because spectral rendering deals with wavelengths rather than the RGB Rec. 709 gamut, the Open Domain result of the first kind of rendering can have the gamut of the entire visible spectrum.

The problem now, if I understand correctly, is that most gamut mapping approaches only deal with mapping from one colorspace to another, not from the entire visible range to the Rec. 709/sRGB gamut. As you can see in my previous post, although TCAMv2 deals with wide gamut much better, the out-of-display-gamut blue still skews towards purple. And that is currently the best option we have.
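A small numeric illustration of why the spectral locus exceeds Rec. 709: pushing the (tabulated, approximate) CIE 1931 colour matching values for monochromatic 460 nm light through the standard XYZ-to-linear-sRGB matrix yields a negative green channel, i.e. a colour no sRGB display can reproduce:

```python
# CIE 1931 2-degree colour matching values at 460 nm (tabulated, approximate)
X, Y, Z = 0.2908, 0.0600, 1.6692

# Standard XYZ -> linear sRGB matrix (Rec. 709 primaries, D65 white)
M = [( 3.2406, -1.5372, -0.4986),
     (-0.9689,  1.8758,  0.0415),
     ( 0.0557, -0.2040,  1.0570)]

rgb = [a * X + b * Y + c * Z for (a, b, c) in M]
print(rgb)  # the green channel comes out negative: 460 nm lies outside Rec. 709
```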

I believe this quote from Mark summarizes the whole thing:


It might seem like pedantry, but it’s worth noting that nothing is “real” here. It’s just another model that marches closer to some other ideas of how such a model should behave.

It might seem logical to suggest that if we render in the “same primaries” as the display, it is solely about the “intensities”, but it’s not quite that simple. That could be a sane starting point for viewing the problem; however, the idea of the “appearance” of the tristimulus values is very tricky here. For example, as we increase the “strength” of a tristimulus position, the “colourfulness” must increase in a correlated manner. This can be challenging to contain strictly to “intensity” in this example!

And if we get too hung up on appearance, we forget about the medium and how the medium expresses content. There is no solution without working backwards from the medium of expression in my mind.

They are both the same really! Consider transforms as being nothing more than some series of transformative processing that tries to convert tristimulus, or spectral, into an image.

More explicitly: Electromagnetic radiation, or any model loosely similar to it, is not an image. Further, an image is not merely a simplistic transformation of stimulus. That stuff in the EXR? It’s tristimulus data, not an image!

It’s arguably deeper than this too! TCAM v2 makes some really sane design choices in the pragmatic sense; given that appearance modelling is still subject to all sorts of complexities and unsolved facets, and given that the output can vary extensively across different mediums, TCAM v2 attempts to focus on “chromaticity linear” output. This is sane, and “better” given the contexts listed. However, being sane given some constraints means that there is still much work to be done on the front of image formation.

It is simply unsolved, to greater or lesser degrees of solution. TCAM v2 is a sane design solution given the constraints.

From my vantage, specific facets of the human visual system must be prioritized when considering image formation. Specifically, the rabbit hole I’ve been chasing now for far too long is notions of “brightness”, which trace all the way back to black and white film. It’s a tricky as hell surface, hence why there’s no direct solution just yet. Hopefully sooner rather than later. The idea of “brightness” has dimensions that support ideas of “colourfulness” and as such, are critically important.

Layering on wider gamuts and other nonsense isn’t helping things at all, as we can easily see if we take an open domain tristimulus render using the exact primaries of the display. Even this simple example remains unsolved in any satisfactory manner. Anyone who professes to show a “solution” to this basic problem is rather easy to refute. So if we can’t solve BT.709 rendering in any reliable manner, what’s the betting we are knowledgeable enough to solve wider ranges of chromatic content? Close to zero.


I’m even more confused…

Maybe it’s just me needing a visually “not obviously weird” result. E.g. I just want light to behave a little closer to spectral mixing; I don’t even care what kind of mapping or primaries or spectrum-sampling method is used. The only thing I want is for a yellow light on a green object not to result in a greyish tint.

RGB mixing is fast, but it is only remotely physically meaningful if there are just those three specific wavelengths in the scene (and your display isn’t using laser LEDs either). So any method that has multiple wavelengths in between is much closer to real-life lighting. From the look of this project, the algorithm is principally fine in this respect.
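That order-of-operations point can be shown with a toy numeric sketch. All the spectra and the crude bin-to-RGB projection below are invented purely for illustration:

```python
# Toy 5-bin spectra over roughly [450, 500, 550, 600, 650] nm. All numbers invented.
yellow_light  = [0.05, 0.20, 0.90, 1.00, 0.80]   # broad yellow-ish emitter
green_surface = [0.10, 0.50, 0.90, 0.30, 0.05]   # green-ish reflectance

def spectral_reflect(light, surface):
    """Per-wavelength multiplication - what a spectral renderer does."""
    return [l * s for l, s in zip(light, surface)]

def to_rgb(spectrum):
    """Crude bin->RGB projection: B ~ first two bins, G ~ middle three, R ~ last two."""
    b = (spectrum[0] + spectrum[1]) / 2
    g = (spectrum[1] + spectrum[2] + spectrum[3]) / 3
    r = (spectrum[3] + spectrum[4]) / 2
    return [r, g, b]

# Multiply spectrally, then project to RGB:
rgb_of_spectral = to_rgb(spectral_reflect(yellow_light, green_surface))
# Project to RGB first, then multiply channel-wise (what an RGB renderer does):
rgb_first_then_multiply = [l * s for l, s in
                           zip(to_rgb(yellow_light), to_rgb(green_surface))]
print(rgb_of_spectral)
print(rgb_first_then_multiply)  # the two orders of operation disagree
```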

The point on exposure(?)/sunlight/sky is valid as well, but since this is an artistic tool, you simply adjust everything till it looks nice enough; isn’t that the point of using subjective(?) stimulus as the way of thinking?

Mapping after rendering, like the Filmic stuff, isn’t going to solve any of this: in the case of the “greyish tint” problem, it would turn every instance of that same greyish colour into the greenish colour, which is not hard to understand as a problem, no matter how wide the “gamut” is.

I’ve been following @troy_s for a while now; they have done quite a lot of in-depth research on this topic. I don’t think I need to dig that deep to get a visually satisfactory result. I’ll keep following along with the project :smiley:


The challenge can be seen clearly here, if we focus on the seemingly simple statement.

You are describing a colour. That means that you are describing the sensation of a colour. This is sometimes referred to as the “perceptual” facet of colour. This facet is nowhere in the tristimulus data.

That is the crux of the issue.

The actual problem here is that we are all bound by the current medium we are looking at.

If we have data that represents spectral stimulus, the meaning of that stimulus cannot be expressed at the display.

This leads to two further questions:

  1. What should we see in a formed image at the tail end of a medium?
  2. What does data that represents spectral stimulus mean to a medium?

That becomes a much larger issue when we consider that the idea of larger gamuts would hope to hold some idea of consistency of image formation across different mediums. The image shouldn’t “appear different”^1 across different mediums. Note that tricky word “appear” again.

The dumb data is just dumb data; it doesn’t give us any hint as to how to form an image in a medium. This is the depth of engineering that was lost when electronic sensors and digital mediums entered the mix.

The larger point I would stress is that even if we had a massive Uber Theatre 2000 with 100 emitters per pixel, and a dynamic range that is infinite, the goal is not simple simulation of the stimulus data. It never has been!

This is vastly more challenging than it seems, and doubly so when we tackle consistency across image mediums.

This is the crux of the problem in our era. Everyone on a MacBook Pro or an EDR display or out in print requires that the creative choices carry across to different mediums. Otherwise you’d need to author one image for every medium!

Further still, if it were all simply about “adjusting everything” then there’s no problem to begin with; render in sRGB primaries and simply tweak to what you see coming out of the display!

Sadly, it’s a tad broader of a problem than that.


  1. Subject to image formation versus image replication intention within the medium capabilities.

The two further questions you brought up make sense. My understanding is that even if I have this image that holds this so-called spectral data, it doesn’t map to any real-world unit or to how pixels should be illuminated. (Right? If so, then could there be, say, an arbitrary mapping that specifies “what the hell is a 1.0”? Or if that’s not what you meant, then I don’t think I understood correctly.)

One thing though: at least in the current state of image technology, it’s almost a must for image making to be dictated by the final medium, which is dumb, I know, but I don’t really see a case where e.g. an emissive medium can be translated to a reflective medium and “give the same visual feeling”? (Maybe yes, but then you probably need to specify all the related physical properties of those mediums as well as the viewing environment?)

For the “sensation of a colour” thing… I do agree that plain tristimulus data doesn’t carry the “sensation”, but there’s also the fact that, just like white balance in photos, you can shift the whole thing around and a red will still appear red because “the relative chroma (? I’m really bad with tech names) of those colour patches stays”? (Or… should I assume that under current colour tech this transformation doesn’t preserve that kind of relationship in any meaningful way, which then becomes a problem?)

(And then, do we have a definition of what this tool, which supposedly should enable perceptual colour translation onto all mediums, does?)

I still don’t think I fully understand the problem, but I’m still quite curious about it… Hope you guys don’t find me irritating; my head doesn’t take technical terms that well. :sweat_smile: