Thoughts on making Cycles into a spectral renderer

1000% - I am also an ‘Octane for Blender’ user watching this develop like a hawk :eyes:

This is the only official reference I’ve seen for this, but it looks like it’s being folded in :smiling_face_with_tear:


Hi, it’s great to see the recent progress and excitement around Spectral Cycles, it’s very heartwarming to see things progress even when I haven’t been able to dedicate time to it.

I’d like to try to find a structure which allows me to continue to help push Spectral Cycles forward while also empowering the contributions of others to the project. I think 4.X will be a great time for Spectral Cycles to land, especially considering AgX is now in (it seems like it should handle extreme lighting conditions much better than pretty much anything else out there - not dissimilar to how photographic film responds to spectra).

I’ve recognised that contributing code directly is not the most effective use of my effort: I’m really not familiar with C++ development, I don’t know the Blender codebase and patterns very well, and since I don’t have much time to contribute, each time I open the project back up I spend most of that time re-familiarising myself with the changes and merging in recent upstream work.

I think what I can do is act as a project manager of sorts for Spectral Cycles. I understand spectral rendering and the concepts involved in implementing it relatively well, I understand how the various pieces fit together, and have the historical context of the project. I think helping accelerate development by @pembem22 and more recently @weizhen (who I just reached out to) is a great way of spending my time.

I’d love to help coordinate anyone else who would like to contribute - ideas, designs, future direction, quality and performance test scenes, education material, hype reels, all of this is part of making it successful in my opinion. I think the most urgent need is currently developers familiar with Cycles who are able to help with the remainder of the well-known tasks to get a complete and bug-free implementation of Spectral Cycles. Once we’ve completed step 0, I feel like the surface area for contributions increases significantly.

If you would like to help out and have an idea of what you could contribute, please reach out to me.

I’m excited to continue seeing this project develop and hopefully land in the not-too-distant future. It’ll be a great improvement, immediately giving quality improvements to every render and enabling looks which were previously very hard/impossible to do, and will open the door for a whole lot more exciting changes down the line.


Sorry for the late reply.

I haven’t been following Cycles development lately, can you clarify what reworkings would allow that?

I loved contributing to the Spectral Cycles project and seeing all the community around it, so I’d like to continue doing that. I’ll check the state of the old branch to see what can be done with it, so we can decide what’s next.


In my understanding at least, stuff like the more flexible types throughout Cycles/Blender that don’t assume float3 or float4 but can also work with spectra.
I might be wrong about that, but changes like those ought to make things more cross-compatible with less duplication.


Introducing the Spectrum type to the main branch does immensely reduce the code differences between the branches and simplifies the merging process. But that doesn’t solve the problem with duplicated kernels. While the code used to compile RGB and spectral versions is identical, the resulting binaries will be different as the underlying data type will differ throughout the entire kernel code.


That might be a question for Brecht, then.

Like, not just for him, but he presumably needs to be in the loop and might at least have some ideas of how to facilitate more overlap in the kernel. (Assuming it’s possible at all)

There are probably going to be legitimate reasons not to use spectral rendering even in the future, in particular when it comes to NPR (though most of that is gonna be covered by EEVEE, we can’t expect every NPR workflow to avoid Cycles), where you might need three distinct, independent color channels for post-processing purposes. So ideally it should be possible to turn it off with a checkbox.
I can even, in principle, imagine weird complex combined workflows where you want to work with both three (or more) independent channels, and spectral rendering on top, but that’s gonna be very very niche and, in the worst case, possible by rendering multiple times.


I of course don’t know all the details, or even the rough strokes, but most of what the kernel does presumably ought to be identical, right?
The only difference is how colors are being sampled. But all the, like, ray bounce stuff and keeping track of the various passes and all that ought to be identical.
The only thing that changes is the part that effectively multiplies up colors: looking up the material’s spectral values at the randomly sampled hero wavelengths and multiplying by the light’s values at those same wavelengths, rather than looking up a triplet of color channels of the material and then multiplying by the three color channels of the current light.
Plus an additional lookup that maps wavelength to color I suppose.
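To make that concrete, here’s a minimal sketch (all names, the placeholder spectral curve, and the 4-wavelength count are my assumptions, not Cycles code) of how the per-bounce throughput update compares between the two modes:

```cpp
#include <array>
#include <cstddef>

using RGB = std::array<float, 3>;
constexpr size_t kHeroCount = 4;  // wavelengths traced together per path
using Spectrum = std::array<float, kHeroCount>;

// RGB path: multiply throughput by the material's RGB albedo triplet.
RGB shade_rgb(const RGB &throughput, const RGB &albedo) {
  RGB out;
  for (size_t i = 0; i < 3; ++i)
    out[i] = throughput[i] * albedo[i];
  return out;
}

// Placeholder spectral reflectance curve for the material: a linear ramp
// that gets brighter toward long wavelengths (purely illustrative).
float reflectance_at(float lambda_nm) {
  return (lambda_nm - 380.0f) / (730.0f - 380.0f);
}

// Spectral path: look up the material's reflectance at each hero
// wavelength, then multiply component-wise, just like the RGB case.
Spectrum shade_spectral(const Spectrum &throughput,
                        const Spectrum &wavelengths_nm) {
  Spectrum out;
  for (size_t i = 0; i < kHeroCount; ++i)
    out[i] = throughput[i] * reflectance_at(wavelengths_nm[i]);
  return out;
}
```

Either way it’s the same component-wise multiply; only the element count and where the per-component values come from differ.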


The thing is, the entire code of the kernel is inlined during compilation, which means that color math operations and RGB/spectral conversions are duplicated in every place they are used. This is done so that the compiler can optimize the resulting binary as much as possible, combining and/or rearranging operations with nearby ones to achieve maximum performance. As a result, different parts of the code get mixed together in different ways even though the source is the same, and at that point it’s not possible to replace RGB operations with spectral ones, or vice versa.

Also, having the code check at runtime whether it’s doing RGB or spectral rendering and choose an appropriate function would result in a performance hit. It doesn’t allow the compiler to make the aforementioned optimizations and adds overhead.

ah yeah that makes sense. That sucks…
There may then be no way around this.

Except maybe, would it be possible to split the current “Composite pass” into a spectral and regular RGB pass?
Something like it assuming that both types of rendering happen but then somehow mask out the one that’s not needed?
I’m not sure that’s A Thing. I just know that GPUs like “doing it all but masking” more than elaborate if-thens. No idea if that sort of approach (if it’s even possible) also translates to CPUs.
This also seems more complicated a difference than would usually be the case in like compute shaders, where the difference isn’t completely separate datatypes but rather “same type in principle but just don’t do stuff with this field pls”

You could provide a virtual method table (VMT) for specialized functions as part of the core renderer configuration. Cycles-RGB and Cycles-Spectral are then identical including the call to the specialized functions, but the calls end up at different implementations. This is kinda standard OOP programming, but implemented manually. Using the VMT, there would be near-zero code-duplication.
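A minimal sketch of what such a manual VMT could look like (struct and function names are illustrative, not the Cycles API):

```cpp
// A "manual VMT": a table of function pointers chosen once at
// renderer-configuration time. The shared kernel code is written only once
// and dispatches through the table.
struct ColorOps {
  int channel_count;
  // Scale every channel of `data` by `factor`; each implementation
  // decides how many channels exist.
  void (*scale)(float *data, float factor);
};

static void scale_rgb(float *data, float factor) {
  for (int i = 0; i < 3; ++i) data[i] *= factor;
}

static void scale_spectral(float *data, float factor) {
  for (int i = 0; i < 4; ++i) data[i] *= factor;  // 4 hero wavelengths
}

static const ColorOps kRGBOps = {3, scale_rgb};
static const ColorOps kSpectralOps = {4, scale_spectral};

// Shared kernel code, identical source for both modes. The indirect call
// is exactly what may block inlining and vectorization, as noted below.
void attenuate(const ColorOps &ops, float *throughput, float factor) {
  ops.scale(throughput, factor);
}
```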

On the other hand (and even better): if the specialized functions are not too large, you can simply rely on CPU/GPU-branch prediction which is superb in recent hardware architectures. If-statements are practically transparent / no-op when they always resolve to the same branch.

But of course: try it and measure.


The problem with branch prediction here is that while the branches are free, they will likely interfere with vectorization.

I accidentally figured out a technique in vanilla 3.6 Cycles: if you shift the hue of your light sources (two complementary pairs), you can composite your own manual spectral pathtrace!


a more sophisticated version of this was actually how @smilebags got started on the spectral branch way back when, iirc.
Initially he had a script that would monochromatically render the same scene many times based on various materials’ spectral properties, and then composite it all together with the correct color weights.

What you did here could hardly be called spectral though.


One could use methods like those @Valor_Cat and @kram1032 mentioned above, but the code then becomes very slow. Assuming proper integration over the spectrum (e.g. using hero wavelengths), I now see this balance when viewing the code as a tree: putting if-statements near the leaves limits the possibilities for vectorization, while putting if-statements near the root causes lots of duplication by the compiler. Somewhere in between is the optimum. Once you decide that you want to support both RGB- and Spectral-Cycles within the same compiled binary, the need to find this optimum presents itself. Unfortunately, with every large (structural) change to Cycles, you will need to find the new balancing point - a (say) yearly task for a capable developer.

Providing separate compiled binaries for RGB- and Spectral-Cycles may thus be more sustainable after all.

Alternatively, one could understand RGB-Cycles as a special case of Spectral-Cycles where the wavelength-sampling algorithm repeatedly samples the same channels 0,1,2 (e.g.: R,G,B). Then there may be no need for if-statements nor code duplication and no need for RGB-to-Spectral conversions. Or is that what @smilebags tried already?
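As a rough illustration of that idea (all names hypothetical, nothing here is Cycles code), the “sampler” for RGB mode would just be a degenerate one that always returns the fixed channel indices 0, 1, 2, while the spectral sampler draws real bins across the spectrum:

```cpp
#include <array>

struct SampledBins {
  std::array<int, 3> bins;  // which spectral bins / channels this path carries
};

// "RGB mode": the sampler ignores the random number and always returns
// channels 0, 1, 2 - deterministic, zero variance, no if/else needed in
// the surrounding kernel code.
SampledBins sample_bins_rgb(float /*u*/) {
  return {{0, 1, 2}};
}

// Spectral mode: a stratified draw of three bins across the whole
// spectrum, driven by one uniform random number u in [0, 1).
SampledBins sample_bins_spectral(float u, int bin_count) {
  SampledBins s;
  for (int j = 0; j < 3; ++j) {
    float t = u + j / 3.0f;
    if (t >= 1.0f) t -= 1.0f;  // rotate back into [0, 1)
    s.bins[j] = static_cast<int>(t * bin_count);
  }
  return s;
}
```

With this framing the kernel itself never branches on the mode; only the sampler plugged in differs.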

Yeah, there are two problems with the individual channels approach:

  1. it’s slooooow. You need many wavelengths. Ideally infinitely many (or at least the ~450 nm worth of single nm step channels we can see).
  2. it’s biased. You aren’t actually guaranteed to converge against the correct color.

Hero Wavelengths can achieve this with very few wavelengths per sample. In principle I think you could use just three, which would mean the exact same load as RGB rendering. Though in practice I think we settled on 4 or 8 in order to reduce color bias, and this can still happen very fast.
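For reference, the classic hero-wavelength construction can be sketched like this (range constants and names assumed for illustration, not the Spectral Cycles implementation): one randomly drawn hero wavelength plus C−1 companions rotated in equal steps around the visible range, so all C wavelengths share a single path.

```cpp
#include <array>

constexpr float kLambdaMin = 380.0f, kLambdaMax = 730.0f;
constexpr int C = 4;  // wavelengths per path, matching 4-wide SIMD

std::array<float, C> hero_wavelengths(float u /* uniform in [0,1) */) {
  const float range = kLambdaMax - kLambdaMin;
  const float hero = kLambdaMin + u * range;
  std::array<float, C> lambdas;
  for (int j = 0; j < C; ++j) {
    // rotate the hero by j/C of the range, wrapping at the upper end
    float l = hero + (range * j) / C;
    if (l >= kLambdaMax) l -= range;
    lambdas[j] = l;
  }
  return lambdas;
}
```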

I suppose this would be so if we chose just three wavelengths. (I’m guessing part of the issue is that we need to be slightly flexible about the number of channels sampled? Not per sample, but certainly per run.)

I saw @weizhen mentioned something called CMIS

The link also leads to a video that contains a comparison between Hero and this new method:

I’ve had little time to devote to the spectral_cycles branch; currently there is only dispersion and nothing else:

I also won’t have any time for this in the following 6 months because I’m visiting Weta.
We will surely be using hero wavelength sampling. CMIS in the context of spectral rendering is essentially the insight that for hero wavelength sampling one doesn’t need to space the 4 wavelengths evenly, but can instead sample each one individually with the favoured pdf/technique, which reduces noise even further.
4 channels are used simply because one can make use of SSE instructions. 8 can also be used, but 4 is usually enough and the color noise goes away relatively quickly.
The time cost of light transport with spectral data in comparison with RGB should be negligible. But it does add some memory cost: it appears we are just changing from 3 channels to 4, but it’s actually 4 wavelengths + 4 intensities (one per wavelength) + 4 pdfs (RGB only needs a single float pdf).
It would surely be easier for users and for compatibility reasons to add a checkbox for switching between spectral and RGB at runtime, but we’ll have to measure the cost. If spectral later proves mature enough, I’d consider keeping just that.
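That memory accounting can be sketched as two illustrative structs (layout assumed, not the actual Cycles data structures): 4 floats of per-path state for RGB versus 12 for 4-wavelength spectral.

```cpp
#include <array>

// Per-path color state in RGB mode: 3 throughput channels + 1 pdf.
struct RGBPathState {
  std::array<float, 3> throughput;  // R, G, B
  float pdf;                        // single scalar pdf
};                                  // 4 floats total

// Per-path color state in 4-wavelength spectral mode: each wavelength
// carries its position, its throughput, and its own pdf.
struct SpectralPathState {
  std::array<float, 4> wavelengths;  // the 4 hero wavelengths (nm)
  std::array<float, 4> intensity;    // throughput at each wavelength
  std::array<float, 4> pdf;          // one pdf per wavelength
};                                   // 12 floats total

static_assert(sizeof(RGBPathState) == 4 * sizeof(float),
              "RGB state: 4 floats");
static_assert(sizeof(SpectralPathState) == 12 * sizeof(float),
              "spectral state: 12 floats");
```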


I guess I had already thought of and implemented the concept behind CMIS in the context of spectral rendering without giving it a name. Nothing in the math seemed to suggest that you couldn’t spread the additional wavelengths uniformly amongst an importance-weighted distribution of wavelengths. That has been implemented for quite a while now in the Spectral Cycles branch.


honestly the necessity of uniform sampling was, to me, the weirdest part of the original hero wavelength sampling and it makes a lot of sense that you can do better

Didn’t consider that, thanks! So you’re talking 12 floats instead of 3, which is presumably really not easy to square in a unified way at the hardware level.

I do think properly separable/independent RGB rendering has its place, especially for NPR applications. Like, unless you use weird material combinations, or specific material effects like some approximate dispersion / thin film models, there is not going to be any crosstalk between the channels, right? It’s effectively like rendering three separate monochrome renders. And that could presumably be leveraged for certain NPR effects, where you effectively get three channels you can treat separately to your heart’s content. Although I suppose most of those are likely gonna be using EEVEE.
If it’s not too much of a performance impact, my personal ideal would be “Spectral On By Default, RGB On Request”…

It also mentions path guiding support, which would be huge. Presumably some path guiding could be done specifically on which wavelengths to pick, in the same way it’s used for figuring out which paths to explore more carefully?

This is an exceptionally well made video on the topic. Very clear and easy to follow. There also is a follow up for sampling even more complicated settings from last year:

Doesn’t seem to directly be useful for Hero Wavelengths but it does show off some possibilities of where Cycles could eventually be headed…