Thoughts on making Cycles into a spectral renderer

Hi,

I’ve created a workflow with which users can render spectral images with Cycles. At this stage, I’m not proposing to turn Cycles into a spectral renderer, but I have been encouraged to bring it up here as a point of discussion.

I am aware Cycles is not trying to be a perfectly physically correct renderer, and I think the way Cycles does things is good: it is simple to use and gives pleasing results easily. I don’t want to remove these aspects of Cycles, which is why I am phrasing this as a discussion rather than a proposal.

The workflow (with a lot of help from @troy_s) is now mathematically sound (it produces the correct colours as per colour science standards), and allows users to choose RGB colours in materials (using a somewhat crude spectrum synthesis method), meaning the added difficulty in creating materials is actually quite low. In terms of performance, I expect there would be some slowdown, but from what my testing has shown, it isn’t as much as one might think. My test images rendered in roughly 1.2-1.5 times the time regular Cycles took, and that included a whole lot of unneeded BVH rebuilds.

My point of discussion is this:
Would it fit with the outlook of Cycles to have a spectral “mode”, which would trade some performance for the simulation of spectral phenomena? From what I can gather, it doesn’t necessarily go against the Design Goals, but I guess it is up to the owners to determine whether this fits with where they want to take Cycles. Doing light transport calculations with only 3 lights is a pretty severe approximation, so in that sense, rendering spectrally actually fits in with the design goals.

I am confident Cycles is capable of becoming a spectral renderer with relative ease. Of course there are plenty of ways we could make the user’s life easier, which would take extra time. All of this will come with time, discussion and testing.

41 Likes

I’m fine with spectral rendering support being added to Cycles, as long as we can do it in a way that doesn’t have a big performance impact on RGB rendering.

This can mean a few things, and each can be implemented independently:
1) Add features like dispersion and thin film to BSDFs. This can be done in individual BSDFs without deep changes to the code.
2) Spectral integration: converting the output BSDFs/closures to a spectral representation, doing all multiplications in the integrator with this representation, and then converting back to RGB at the end.
3) Spectral colors in shader networks. This is very difficult to fit in design-wise and not that important in my opinion.

From what I understand you are talking about 1) and 2). Adding, for example, dispersion to the glass BSDF or thin film to the Principled BSDF may be relatively straightforward to fit in and keep localized.

For spectral integration, it depends a bit on how advanced the algorithm should be. If we have a spectral representation with 3 components, most code can just work the same as RGB. If it needs to be something else, then there is potentially a significant performance impact, or the need to template a lot of the integration code.

10 Likes

Thanks for the quick reply.

If we are to make two ‘modes’ in Cycles, one for the current 3 channel spectral rendering (RGB), and one for many-channel spectral rendering, the performance of one should not influence the other. The most significant piece of work I can envisage is in converting nodes to work on spectral data ‘transparently’ without the user needing to be aware of that fact.

If it turns out to be comparatively fast, using spectral sampling all the time is the other option, which would save on duplicated development down the track.

Unfortunately, while you can get reasonable (local) approximations of those effects in a single BSDF, doing so doesn’t give you a spectral renderer.

The reason I say local approximation is that in converting from spectral to RGB, you lose all of that extra information. Dispersion, for example, outputs pure wavelengths of light. If your ‘dispersion’ glass BSDF is lit by daylight, the intensities of the refracted wavelengths will differ from each other: each refracted wavelength will match the intensity of that wavelength in the light source’s spectrum. If you were to light the same glass with a yellow laser (a single wavelength), you should get only a single refracted ray. But if that yellow is represented by R: 1, G: 1, B: 0, then you’d have to make up a spectrum from that, and you’d end up with some fuzzy red-yellow-green refraction.

In order to make a spectral render, rays need to have a wavelength associated with them for their entire lifetimes, including when they interact with multiple materials. That is the fundamental change which would need to occur in Cycles.

The simplest method to do this is to render the entire scene x times (I chose 36), treating each image (they contain only luminance, no RGB data) as the representation of the scene under that wavelength. Taking the result of those renders, you can weight them by the colour matching functions, sum them up, and convert the result to RGB.
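
A minimal sketch of that accumulation step (names like `render_luminance_pass` and `cmf_xyz` are hypothetical stand-ins, not existing Cycles functions):

```cpp
#include <array>
#include <vector>

// Hypothetical: render the whole scene at one wavelength, returning a
// single luminance value per pixel (no RGB data).
std::vector<float> render_luminance_pass(float wavelength_nm);

// Hypothetical: CIE 1931 colour matching functions at a wavelength.
std::array<float, 3> cmf_xyz(float wavelength_nm);

// Sum N single-wavelength renders into one XYZ image; convert XYZ to
// display RGB downstream as usual.
std::vector<std::array<float, 3>> render_spectral(int num_pixels, int num_bins)
{
  const float lambda_min = 380.0f, lambda_max = 730.0f;
  const float step = (lambda_max - lambda_min) / num_bins;

  std::vector<std::array<float, 3>> xyz(num_pixels, {0.0f, 0.0f, 0.0f});
  for (int i = 0; i < num_bins; i++) {
    const float lambda = lambda_min + (i + 0.5f) * step;  // bin centre
    const std::vector<float> pass = render_luminance_pass(lambda);
    const std::array<float, 3> w = cmf_xyz(lambda);
    for (int p = 0; p < num_pixels; p++)
      for (int c = 0; c < 3; c++)
        xyz[p][c] += pass[p] * w[c] * step;  // Riemann sum over wavelength
  }
  return xyz;
}
```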

1 Like

As your demo has shown, there is literally zero difference between a three channel and an n channel raytracing approach.

See above; there is very little difference between “RGB” and spectral; it instead amounts to “change Cycles from a fixed three channel raytracing engine to a variable n channel one.”

About all that needs to change is to move away from fixed arrays of three floats to an n-length series on the stack. The rest is merely metadata, and allows the work to scale to needs. That is, if three spectral components are sufficient (e.g. REC.2020), use three. For other effects, use as many as required. This is, I believe, a similar approach to what Manuka does.
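
As a rough illustration, the fixed float3 throughput could become something like this (a sketch only; the names and the stack bound are made up):

```cpp
// Hypothetical n-channel replacement for a fixed float3 colour. The
// channel data lives on the stack; which wavelength each channel sits at
// is metadata kept outside the hot loop.
struct SpectralColor {
  static const int MAX_CHANNELS = 8;  // stack bound, tunable
  float v[MAX_CHANNELS];
  int count;  // active n: 3 for REC.2020-style primaries, more if needed
};

// Per-channel multiply, the bread and butter of throughput updates.
SpectralColor mul(const SpectralColor &a, const SpectralColor &b)
{
  SpectralColor r;
  r.count = a.count;
  for (int i = 0; i < a.count; i++)
    r.v[i] = a.v[i] * b.v[i];
  return r;
}
```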

It becomes a metadata issue on the inputs and outputs at that point for the conversions from and to RGB colourspaces[1], and less about exhaustive changes to the raytracer.

[1] Including spectral upsampling via Hero Wavelength or Scott Burns curves etc.

3 Likes

The only significant difference I can see right now is that REC.709 primaries are not monochromatic, so anywhere in the Cycles pipeline (materials) where assumptions are made about how those channels work will have to be reviewed. An RGB 1, 0, 0 metallic shader assumes that no green or blue light is reflected, but as a REC.709 red light source isn’t monochromatic, that assumption cannot hold.

I think you are right that we would get a lot of the benefit, without changing the user-facing API at all, by doing what you suggested: replacing the 3 channels with n channels and having a step at the end to interpret the result. That could lay the foundation for any type of spectral rendering, as long as the method used is spectral binning rather than Monte Carlo spectral sampling.

Yes, but it’s not entirely obvious how to implement those two modes. Other renderers often use templates to avoid the performance impact of branching at runtime, but this is not as easy for us since we are limited to C in the kernel due to OpenCL.

Further, we currently benefit from SIMD when manipulating colors, and ideally we can preserve this. Longer color arrays (if needed) take up more stack space which will negatively affect GPU performance.

It doesn’t of course, but improving the BSDFs and spectral integration both require a significant amount of work and can be done independently. Users should still be able to use e.g. dispersion with RGB integration, unless we can make spectral fast enough that the choice is no longer needed.

Right, but we should be randomizing the wavelength per pixel per sample to make it work better with interactive progress rendering and adaptive sampling / denoising.
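
A sketch of what that could look like, with a generic integer hash standing in for however Cycles actually derives its per-sample random numbers:

```cpp
#include <cstdint>

// A generic integer mix (illustrative only); any good hash that
// decorrelates pixel index and sample number would do here.
static float hash_to_unit_float(uint32_t pixel_index, uint32_t sample)
{
  uint32_t h = pixel_index * 9781u + sample * 6271u + 0x9e3779b9u;
  h ^= h >> 16; h *= 0x7feb352du;
  h ^= h >> 15; h *= 0x846ca68bu;
  h ^= h >> 16;
  return (h >> 8) * (1.0f / 16777216.0f);  // 24-bit value in [0, 1)
}

// Each (pixel, sample) pair gets its own wavelength, so the colour noise
// averages out over progressive samples rather than banding per pass.
float sample_wavelength(uint32_t pixel_index, uint32_t sample)
{
  const float u = hash_to_unit_float(pixel_index, sample);
  return 380.0f + u * (730.0f - 380.0f);  // uniform over the visible range
}
```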

I’m familiar with the spectral algorithms; as usual, what is simple in theory has more complicated implications, particularly for performance on the GPU. For algorithms like Hero wavelength sampling we also need changes to importance sampling in e.g. SSS and volume rendering.

1 Like

This is exactly the invaluable work @lukasstockner97 has done in fixing Cycles nodes, in particular Blackbody, Sky Texture, and Wavelength. Making the system agnostic is the key point.

I don’t believe this detail matters much on the raytracing side; it just crunches intensities of the lights, not caring what lights they are. For example, REC.2020 spectral coordinates crunch just fine using the exact same renderer.

I am not convinced “modes” are required at all. That is, start by making Cycles fully spectral compliant at three components (aka fully agnostic), then tack on optional sidecar metadata. There would be no difference, for example, between assuming REC.2020 primaries via traditional OCIO and the sidecar approach, which defines a single piece of metadata for three channels giving the positions of the primaries in question.

These are areas whose operational flow I have little experience with. I would still think that the majority are energy agnostic?

Manuka does not appear to use Hero Wavelength either; it would seem to use something closer to Scott Burns’ approach. Rather tangential issues when we are talking about the flow inside the renderer, though, I believe.

Yes, if there’s little to no performance impact in doing so, choosing the bin at random (or based on a weighting function) would be best, both aesthetically and in terms of integrating with the current system.

From a few seconds of googling, it doesn’t seem like SIMD is limited to 3-dimensional data. If GPU memory becomes a concern, there might be ways to deal with x channels at a time (balancing SIMD utilisation against memory usage), and the data for the ‘inactive’ bins could be stored elsewhere. I’m not familiar with the details of this, so evidently it’s something that’ll need to be worked out before going too far.

You’re right, the raytracing side of it doesn’t care how the material is made. All I’m saying is that the materials side of it might also need some work.

After a bit of thinking, I’ve come to the conclusion that the best approach (at least initially) is spectral binning, since that will work seamlessly with Troy’s idea of generalising the 3-channel nature of Cycles.

The other approach, Monte Carlo spectral sampling, is alluring but probably not worth it, as you can’t bin samples; each sample has a unique wavelength. That distinction makes some of the related math considerably harder, but there are some advantages too. Using Monte Carlo would likely mean a big overhaul of a lot of Cycles.

I think Monte Carlo sampling with 3 channels is by far the simplest method to integrate this in a way that is compatible with all Cycles features. It lets you implement this with localized changes in the rendering kernel. If you use N channels or correlated samples it has more complicated implications for GPU rendering and sampling algorithms.

Associating a different wavelength with each path is not that hard; it can be stored in PathState and used wherever needed in the kernel.
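
Something along these lines, as a sketch (the real PathState has many more members; the `wavelength` field and `eval_reflectance_at` are hypothetical):

```cpp
// Sketch: a wavelength carried in the path state for the path's lifetime.
struct PathState {
  int flag;
  int bounce;
  /* ... existing members ... */
  float wavelength;  // nm; sampled once when the camera ray is generated
};

// Hypothetical material lookup: reflectance of a surface at one wavelength.
float eval_reflectance_at(float wavelength_nm);

// A BSDF evaluation can then return a scalar intensity for the path's
// wavelength instead of an RGB triple.
float bsdf_eval_spectral(const PathState &state)
{
  return eval_reflectance_at(state.wavelength);
}
```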

If that’s not all that hard, then a minimalist approach is to expose that path’s wavelength property as a Cycles material input, and then just convert to XYZ (or RGB if there’s a reason to do so) once the sample has been calculated. The issue with this is that it would require changes to how materials are made, or some rework in how materials are interpreted. Cycles treats each ray as 3 lights, but in spectral rendering, the output of a material isn’t a colour, just a brightness: it defines only how that particular wavelength interacts with the surface. This is where the challenge comes in when trying to integrate RGB colour pickers into spectral materials.

The first option, exposing the path wavelength to the user, is simpler to build but requires the user to know how to create spectral materials. The other option is that each instance of a colour in a material is converted to a single intensity based on the path wavelength under the hood. That way the spectral resolution is separated from the material creation and sampling technique, and users are free to ignore the wavelength property if desired.

The thing with attempting to squeeze a spectral renderer into three channels is that you’re not going to get any of the added colour fidelity that way. There are still some benefits, but they are somewhat limited.

I actually believe both paths are overworking things.

Assuming Cycles is a fully agnostic n-channel renderer, there is no real difference between “as it currently is” and “full spectral” mode. It would come down to the metadata format.

Think about REC.2020, which has monochromatic, spectrally defined primaries. What would the difference be between generic non-spectral RGB and an n = 3 with wavelength metadata? The answer is nothing.

That is, if our metadata were “channelA, channelB, channelC, metaA, metaB, metaC”, Cycles doesn’t need to know anything more.

This leads to a few cases which would seem to be solved elegantly via a simple “Render Type” toggle:

  1. Render type Spectral
    1. No metadata on buffer: upsample via the chosen method. This covers colour pickers and generic imagery. Upsample according to the chosen n-count spectral bins for rendering.
    2. Metadata on buffer: run a pass for each channel, as the data is in its “reference” state. For spectral materials provided as sample points, resample at the chosen render channel / spectral bins.
    3. At the tail end of the pipeline, decode according to the CMF. Roll through traditional colour management for sweeteners etc.
  2. Render type Generic
    1. Ignore all metadata. At the tail end, roll through standard three-channel colour management.

This should permit variable spectral bins and accommodate all scenarios, including mixed generic three-channel and n-channel. The only interaction would be a UI element consisting of:

  1. Spectral mode
    1. Number of samples
    2. Spectral bin positions in nm. This is constant throughout a render. A shot requiring complex caustics / iridescence may require 30 bins; a simpler shot, 3. No difference to architecture.
  2. Generic mode

The metadata for a REC.2020 equivalent would be something like:
[ChannelA, ChannelB, ChannelC, 630, 532, 467]

Note how this is no different from crunching along in Cycles with a REC.2020 reference using “Generic Mode”, where only ChannelA, ChannelB, and ChannelC are given. The sole difference is at the very tail end of the render, where one hands back n channels that roll through the CMF into the generic colour management, while the other goes directly to the generic colour management.
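
To make that tail-end difference concrete, here is a sketch of the decode under this scheme (all names hypothetical):

```cpp
#include <array>

// Channel values plus the wavelength (nm) each channel sits at, as in the
// [ChannelA, ChannelB, ChannelC, 630, 532, 467] example above.
struct SpectralBuffer3 {
  std::array<float, 3> channel;
  std::array<float, 3> wavelength;
};

// Hypothetical: CIE colour matching functions at a wavelength.
std::array<float, 3> cmf_xyz(float wavelength_nm);

// Spectral mode: roll each channel through the CMF to XYZ before the
// generic colour management. Generic mode skips this and hands the three
// channels straight to colour management.
std::array<float, 3> decode_to_xyz(const SpectralBuffer3 &buf)
{
  std::array<float, 3> xyz = {0.0f, 0.0f, 0.0f};
  for (int i = 0; i < 3; i++) {
    const std::array<float, 3> w = cmf_xyz(buf.wavelength[i]);
    for (int c = 0; c < 3; c++)
      xyz[c] += buf.channel[i] * w[c];
  }
  return xyz;
}
```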

I think we’re mixing up different aspects here.

I was talking about how to implement 2) from my first comment. For spectral integration, we can Monte Carlo sample 3 wavelengths to integrate instead of RGB. There would be no difference in the converged result compared to 1 or N channels; the only difference is the noise.

Hero wavelength sampling found 4 channels to work well, which is not too far off from 3, and if needed 4 is relatively easy to do in the Cycles kernel without much performance impact.

BSDFs would optionally output the spectral color directly based on parameters like dispersion, without any user facing design changes to how materials work.

Such settings are not needed. With Monte Carlo sampling we can cover the entire range with even 1 channel per sample. There would be color noise in the render, and more AA samples would resolve it.

Does that imply a performance hit? I.e. Monte Carlo samples at the maximum spectral locus bins (30-40) per pass?

How would spectral composition hold up through an entire render? This sounds like it implies downsampling between the BSDFs, which is of course silly and obviously not what you are implying.

Monte Carlo sampling with 1 channel means that for every ray we sample a different wavelength from a continuous spectrum. That’s exactly like a photon in nature, which has one wavelength. There is no downsampling.
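
As a sketch, the estimator would look like this, assuming a hypothetical `trace_path_radiance` that returns scalar radiance for one wavelength and a `cmf_xyz` lookup:

```cpp
#include <array>
#include <random>

std::array<float, 3> cmf_xyz(float wavelength_nm);  // CIE CMFs
float trace_path_radiance(float wavelength_nm);     // hypothetical path trace

// One wavelength per sample: each path behaves like a single photon.
std::array<float, 3> estimate_pixel_xyz(int num_samples, std::mt19937 &rng)
{
  const float lo = 380.0f, hi = 730.0f;
  std::uniform_real_distribution<float> dist(lo, hi);
  const float pdf = 1.0f / (hi - lo);  // uniform wavelength pdf

  std::array<float, 3> xyz = {0.0f, 0.0f, 0.0f};
  for (int s = 0; s < num_samples; s++) {
    const float lambda = dist(rng);
    const float radiance = trace_path_radiance(lambda);
    const std::array<float, 3> w = cmf_xyz(lambda);
    for (int c = 0; c < 3; c++)
      xyz[c] += radiance * w[c] / (pdf * num_samples);  // MC estimate
  }
  return xyz;  // colour noise falls away as samples accumulate
}
```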

With an algorithm like Hero wavelength sampling you attach a couple more wavelengths to the same ray, because it helps to reduce noise. If you use 30-40 wavelengths for one ray it gets pretty expensive though and there are diminishing returns, so it’s less efficient.
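
The rotation trick from the Hero wavelength paper (Wilkie et al. 2014) is cheap: sample one hero wavelength, then space the remaining ones evenly across the range, wrapping around at the end. A sketch:

```cpp
#include <vector>

// Hero wavelength rotation: given a sampled hero wavelength, derive
// count - 1 companions spaced evenly over the range, wrapping at the end.
std::vector<float> hero_wavelengths(float hero, int count)
{
  const float lo = 380.0f, hi = 730.0f;
  const float range = hi - lo;
  std::vector<float> lambdas(count);
  for (int j = 0; j < count; j++) {
    float l = hero + (range / count) * j;
    if (l >= hi)
      l -= range;  // wrap back into [lo, hi)
    lambdas[j] = l;
  }
  return lambdas;
}
```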

Right. So essentially we are discussing two classes of spectral rendering approaches:

  • Monte Carlo
  • Binning

You are leaning towards Monte Carlo.

What is passed between BSDFs in the instance of, say, a 1-channel definition, as absurd as that is for example’s sake? The single channel at a resultant intensity?

The integrator would sample a wavelength, and then BSDF evaluation would return the intensity for that wavelength.

Okay, so it seems like Monte Carlo spectral sampling using hero wavelength (variable count, but nominally 3 or 4?) is suitable. What might be exposed to the user is a new Wavelength input node which provides the current ray’s wavelength.

This makes sense for hero wavelength, and allows the user to utilise the system with no changes to how they create materials. Implementation details can be hidden in individual nodes if desired. This seems like a good way to go.

Preprocessing and postprocessing steps might be as follows:
Each shader might create a table of spectral reflectance values based on a colour input, so that it can 1. utilise more expensive and accurate methods of spectral synthesis, and 2. avoid doing that for every single wavelength it encounters (see the sketch below).
There’s also the obvious postprocessing involved in converting spectral data to XYZ to fit back into Blender’s pipeline.
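
A sketch of that preprocessing idea: pay the expensive synthesis once per colour input, then answer per-wavelength queries with a cheap interpolated lookup (`synthesize_reflectance` is a hypothetical stand-in for a Meng-style solve):

```cpp
#include <algorithm>
#include <array>

// Hypothetical expensive spectral upsampling (e.g. an iterative
// Meng-style solve), run once per colour input at shader setup time.
float synthesize_reflectance(const std::array<float, 3> &rgb, float wavelength_nm);

// Precomputed reflectance table for one colour input.
struct ReflectanceTable {
  static const int N = 64;
  float lo = 380.0f, hi = 730.0f;
  float values[N];

  void build(const std::array<float, 3> &rgb)
  {
    for (int i = 0; i < N; i++)
      values[i] = synthesize_reflectance(rgb, lo + (hi - lo) * i / (N - 1));
  }

  // Cheap linear interpolation, called for every wavelength encountered.
  float lookup(float wavelength_nm) const
  {
    float t = (wavelength_nm - lo) / (hi - lo) * (N - 1);
    t = std::max(0.0f, std::min(t, float(N - 1)));
    const int i = static_cast<int>(t);
    const int j = (i + 1 < N) ? i + 1 : i;
    return values[i] + (t - i) * (values[j] - values[i]);
  }
};
```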

The one unresolved issue is for advanced users who want to actually specify the absorption/reflectance/emission spectrum of a material. If the spectral implementation is hidden away, with no changes to the current materials, you can’t create your own spectra. While this isn’t an issue for most users, it is a nice-to-have.

Another is whether or not phenomena such as phosphorescence are possible. If a BSDF is able to modify the wavelength of the ray, this isn’t a problem.

We should not expose the current wavelength to the user. Shader nodes and OSL are designed to deliver a description of overall material behavior, independent of the view direction or wavelength.

For performance, the most important thing is to have fast wavelength <-> RGB conversion functions in general. Table lookups or a function fit should be able to make this fast, and only if that doesn’t work well enough for some reason should we worry about optimizing for fixed colors.
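
For the wavelength-to-XYZ direction, one such function fit is the multi-lobe piecewise-Gaussian approximation of the CIE 1931 colour matching functions by Wyman, Sloan and Shirley (JCGT 2013). A sketch with the paper’s coefficients, reproduced from memory, so verify against the paper before relying on them:

```cpp
#include <array>
#include <cmath>

// Piecewise Gaussian: different falloff on each side of the peak.
// s1/s2 are inverse widths, following the paper's formulation.
static float pgauss(float x, float mu, float s1, float s2)
{
  const float t = (x - mu) * (x < mu ? s1 : s2);
  return std::exp(-0.5f * t * t);
}

// Analytic fit of the CIE 1931 colour matching functions, after Wyman,
// Sloan & Shirley (2013). Cheap enough to call per sample; a table
// lookup is the main alternative.
std::array<float, 3> cmf_xyz(float l)
{
  const float x = 0.362f * pgauss(l, 442.0f, 0.0624f, 0.0374f)
                + 1.056f * pgauss(l, 599.8f, 0.0264f, 0.0323f)
                - 0.065f * pgauss(l, 501.1f, 0.0490f, 0.0382f);
  const float y = 0.821f * pgauss(l, 568.8f, 0.0213f, 0.0247f)
                + 0.286f * pgauss(l, 530.9f, 0.0613f, 0.0322f);
  const float z = 1.217f * pgauss(l, 437.0f, 0.0845f, 0.0278f)
                + 0.681f * pgauss(l, 459.0f, 0.0385f, 0.0725f);
  return {x, y, z};
}
```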

I wouldn’t worry about user-defined spectra or phosphorescence at all at this point; they seem low priority.

I understand the reasoning behind this view, but I do think it limits the usefulness of having spectral rendering in the first place. Aren’t the ‘incoming’ vector, and the ability to specify different reflectances for red, green and blue, already defining a material based on view direction and wavelength?

Being able to modify arbitrary parameters based on the wavelength is needed to create realistic effects, especially when recreating a physical phenomenon. Maybe there should be an input node similar to RGB Curves, which outputs a value based on the ray’s wavelength. Things like material IORs change with wavelength, and usually a linear ‘spread’ isn’t the desired effect.
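
For instance, glass dispersion is commonly modelled with Cauchy’s equation rather than a linear spread; a sketch, with coefficients roughly those quoted for BK7 glass:

```cpp
// Cauchy's approximation for wavelength-dependent IOR: n(l) = A + B / l^2,
// with l in micrometres. A and B are roughly the values quoted for BK7.
float ior_cauchy(float wavelength_nm)
{
  const float A = 1.5046f;
  const float B = 0.00420f;  // um^2
  const float l_um = wavelength_nm * 1.0e-3f;
  return A + B / (l_um * l_um);  // ~1.517 at 589 nm, higher in the blue
}
```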

Think of thin film interference, accurate spectral dispersion (and absorption) of different materials, realistic SSS absorption distances, and monochromatic and spectrally unique (fluorescent) light sources. All of this is trivial with access to the wavelength, and either convoluted or impossible without. While these aren’t things everyone is going to want to use every day, I’ve seen enough people asking for them that they’re definitely desired by a good number of people.

If it is exposed, there’s nothing requiring users to use it; they can still use materials as-is and benefit from spectral sampling. But being able to drive parameters based on wavelength is a great tool for those who know how to use it.

The reason I bring up the preprocessing step for colours is that the current best-known spectral synthesis method (Meng et al.) is iterative and quite expensive. Getting from a single wavelength to XYZ or RGB is very easy, but synthesizing a spectrum from an XYZ (or RGB) colour is the challenging part. Running an iterative function every time a colour is encountered is likely to get slow. I guess we’ll find out when we try it out.

I can understand why this might be low priority for some users, but there are many valid reasons to have it, and I feel that, if possible, it shouldn’t intentionally be ‘designed out’ of the system.

2 Likes