Thoughts on making Cycles into a spectral renderer

What build are you using? I haven’t tested the branch in a while, so there might be some issues.

I am still using the August 2021 build from GraphicAll; not sure if this is still the case in the latest version. Well… this is one of the reasons why I have been waiting for Smilebags’ new build.

Good news and bad news. The good news is that I have been able to get debug builds working again; the bad news is that the ‘release’ build fails to run.

4 Likes

Never mind, it turned out the new config does not have an XYZ role; solved by copying and pasting the Linear XYZ IE space and the role into the new config.

2 Likes

A question about the new spectral reconstruction in the latest spectral branch (which I haven’t got a build to play with yet): does it mean the spectral reconstruction no longer hardcodes BT.709 primaries? If the image texture node’s input color space dropdown is set to P3, will the reconstruction reconstruct the color from P3 instead of BT.709?

Just putting this here guys, in case you missed it (from the recent rendering meeting)

  • Christophe Hery (Facebook)
7 Likes

This is a deep and challenging question.

When three channels are approximately reconstructed into a spectral distribution, there is an implicit assumption that the resulting distribution contains a sampling across all wavelengths.

If this condition is not met, the reconstruction will end up with “gaps” in the resulting spectra. That means all of the spectral “goodness” folks expect would be missing; the spectral interactions would be interacting with the gaps.

As we “widen” the RGB footprint in terms of spectral locus area on the CIE xy chromaticity diagram, we implicitly force ourselves toward spikier and sharper spectra, reducing the value of the spectral reconstruction from RGB!

I am sure a local minimizer approach could likely find the maximal gamut RGB footprint that produces a sum of R+G+B yielding a flat Illuminant E spectrum, but I sure haven’t done it!
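Not the minimizer itself, but a toy sketch of the objective it would chase: measuring how far the sum of three primary spectra deviates from a flat Illuminant E. The Gaussian primaries here are invented purely for illustration.

```python
import math

# Sample the visible range coarsely.
wavelengths = list(range(400, 701, 10))

def gaussian(center, width):
    # Unit-height Gaussian sampled over the wavelength grid.
    return [math.exp(-0.5 * ((w - center) / width) ** 2) for w in wavelengths]

# Invented primaries for illustration only.
r, g, b = gaussian(610, 60), gaussian(545, 45), gaussian(465, 40)
total = [ri + gi + bi for ri, gi, bi in zip(r, g, b)]

# Flatness error: maximum relative deviation of the sum from its mean.
# A perfectly flat (Illuminant E-like) sum would give 0.
mean = sum(total) / len(total)
flatness_error = max(abs(t - mean) for t in total) / mean
```

A minimizer would vary the primaries' shapes and positions to drive `flatness_error` toward zero while maximizing the enclosed xy footprint.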

Hope this helps to illustrate the tricky nature of the RGB working space choices involved here.

1 Like

Right, and eventually, if the input is BT.2020, the primaries would just be three spikes of single wavelengths, and the scene response would be extreme. But I think the “spikiness” is also part of the spectral feature, like metamerism etc.

Hmm is that how it works though? Individually reconstruct the three primaries and try to mix the three spectra to Illuminant E? Not sure if that’s how I understand it.
From the link Pembem posted:

colour.recovery.XYZ_to_sd_Jakob2019
Recover the spectral distribution of given RGB colourspace array using Jakob and Hanika (2019) method.

It reads to me that it would first convert the triplet of a specified RGB colorspace, to the XYZ space, and then reconstruct it to spectra.

illuminant (Optional [ SpectralDistribution] ) – Illuminant spectral distribution, default to CIE Standard Illuminant D65 .

And there is a parameter for white point as well, nice.

So it seems to me that it should respect the color input’s color space. I want to confirm this, since I believe we used to use Burns’ work that was hardcoded for BT.709 reflectance.
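To make the read of the docs above concrete, here is a minimal sketch (not from the branch) of the first step the pipeline implies: converting a linear sRGB triplet to XYZ with the standard IEC 61966-2-1 matrix, before any spectral reconstruction happens.

```python
# Standard IEC 61966-2-1 linear sRGB -> CIE XYZ matrix (D65 white point).
SRGB_TO_XYZ = [
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
]

def srgb_linear_to_xyz(rgb):
    """Convert a linear sRGB triplet to CIE XYZ (D65 adapted)."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in SRGB_TO_XYZ)

# Linear sRGB white (1, 1, 1) lands on the D65 white point,
# approximately XYZ = (0.9505, 1.0000, 1.0888).
xyz = srgb_linear_to_xyz((1.0, 1.0, 1.0))
```

A P3 input would simply use a different matrix, which is why an XYZ-based reconstruction (such as Jakob 2019) need not be tied to BT.709.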

I’m not sure whether the new spectral upsampling model @pembem22 worked on implementing covers a footprint larger than sRGB; it is based on a lookup table, so the bounds of the available gamut for upsampling are defined ahead of time. We had some trouble with that approach, so we have reverted to summing spectral sRGB primary ‘lights’. This means we’re still limited to converting RGB colours to spectra within the sRGB gamut. Other means of specifying a spectrum still allow for arbitrarily saturated colours (such as the spectrum curves node).

I have an idea for a unique method of representing smooth spectra with 3 values, but I haven’t validated the idea, so it may not perform well enough to be practical. For the initial patch we’ll most likely be sticking to the sRGB-based light-summing approach for spectral upsampling. All approaches I’m aware of will also require the use of a lookup table, and there are some gaps in my knowledge around creating a lookup table that covers the entire visible gamut and is fast to sample.
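A toy sketch of the primary-light-summing idea described above: each channel value weights a fixed basis spectrum and the three are summed. The Gaussian basis spectra below are invented placeholders, not the data the branch actually uses.

```python
import math

WAVELENGTHS = list(range(380, 781, 5))  # 380-780 nm, 5 nm steps

def gaussian(center, width):
    # Unit-height Gaussian basis "light" (placeholder shape).
    return [math.exp(-(((w - center) / width) ** 2)) for w in WAVELENGTHS]

# Placeholder primaries centred near typical R/G/B wavelengths;
# NOT the spectra Cycles actually bundles.
BASIS_R = gaussian(630.0, 50.0)
BASIS_G = gaussian(532.0, 40.0)
BASIS_B = gaussian(465.0, 35.0)

def upsample(rgb):
    """Sum the three basis 'lights', each weighted by its channel value."""
    r, g, b = rgb
    return [r * br + g * bg + b * bb
            for br, bg, bb in zip(BASIS_R, BASIS_G, BASIS_B)]

spectrum = upsample((1.0, 0.5, 0.0))  # an orange-ish colour
```

This also shows why the approach is gamut-limited: with non-negative weights the result can never be more saturated than the fixed basis spectra themselves.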

6 Likes

Yes, this method allows reconstructing even the entire visible gamut, though it won’t be perfect. You can find all of that in the paper. In that case it’ll be hardcoded to XYZ and OCIO will perform all required conversions from any color space, at least that’s how I imagine it’ll work.

Right now this reconstruction method is disabled because there are bugs, but you can easily enable it in the code by changing 1 to 0 on line 184 in intern/cycles/kernel/util/color.h.

2 Likes

As per @pembem22, this is 100% correct. Working through this, it would be reasonable to subdivide reconstruction into at least two categories:

  1. Selecting “within” a given CIE xy footprint. For example, it is entirely reasonable that someone may wish to constrain their reconstruction to a specific as-though-they-were-in-RGB domain. In this case, I strongly believe that summing to equal energy remains an important facet for individuals interested in a rapid reconstruction that maximizes spectral overlaps / effects.
  2. The general reconstruction for within the entire spectral locus, gaps or otherwise.

The least-slope approach has been around since Smits at Pixar. Given that internally the original reconstruction was Illuminant E, I would say it’s not quite the same implementation, but an identical general approach. I firmly believe Illuminant E is critically important to keep biasing out of the math.

The larger issue really is a reasonable mechanism to build-spectra-yourself, without the overhead of a full blown spectral distribution curve.

I believe that it is possible to have a very fluid spectral UI design that stays in close proximity to the existing paradigm of an HSV wheel.

Imagine an HSV wheel with “pegs” that can be moved around the outside. Exactly like the standard triangle inside ring versions that are already out there.


This covers “hue”, except we can think of the circle as being a normalized version of the locus. There’s a means to derive this as well, colourimetrically speaking.

For “saturation”, consider the pin position as the central wavelength of a Gaussian. The Gaussian width would be the “saturation” slider, where maximal “saturation” corresponds with purity, and minimal would represent a maximally wide Gaussian, essentially a flat Illuminant E.

For “value” it would be the emission strength, or amplitude, of the given chosen model. E.g. it could be a closed-domain image range, an open-domain general emission range, or something else.

So for a “simplistic” baby step in a direction that allows image makers to transition, this could at least seem like a mental-model friendly step in the right direction. And for more sophisticated spectral compositions? Just a matter of enabling multiple “pin drops” around that outer ring. In terms of palettes of mixtures designed, I believe that’s three floats per pin position:

  1. Dominant wavelength, or complement for purples.
  2. Width of Gaussian.
  3. Amplitude of emission.
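The three-floats-per-pin idea above could be sketched roughly like this; the normalisation, wavelength range, and handling (no complement logic for purples) are assumptions for illustration only.

```python
import math

def pin_spectrum(center_nm, width_nm, amplitude,
                 wavelengths=range(380, 781, 5)):
    """Emission spectrum for one 'pin': a Gaussian at the dominant
    wavelength. Width acts as inverse 'saturation'; a very large width
    approximates a flat, Illuminant E-like spectrum."""
    return [amplitude * math.exp(-0.5 * ((w - center_nm) / width_nm) ** 2)
            for w in wavelengths]

saturated = pin_spectrum(550.0, 5.0, 1.0)    # narrow: near-monochromatic
desaturated = pin_spectrum(550.0, 1e6, 1.0)  # very wide: nearly flat
```

Multiple pins would simply sum their spectra, which maps naturally onto the “multiple pin drops around the ring” idea.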

Here’s a very rudimentary mock-up. The history scroll and swatch sample were for another design, but the general idea should be communicated within this.

Slight tangent, but it makes sense to explore user interface ideas, as the future is right in front of us. Having some general ideas out there might help someone to bridge to the next steps.

14 Likes

Just did some tests against the August 2021 build from GraphicAll. I don’t understand why, but I think it is actually successful in reconstructing the P3 gamut:

Not sure if it’s my workflow being wrong or something. I had the color sweep texture’s input color space set to P3 (OCIO stanza copied from Filmic Blender on GitHub) while the working RGB space was still BT.709, made the texture an emission plane, and put the camera right above it. I then turned on the spectral switch and rendered. I used the new Convert Colorspace node to convert the render result from the working BT.709 space to XYZ (the Resolve diagram seems to have problems with negative values), and then exported it to Resolve (timeline space set to CIE XYZ) for the CIE xy diagram. And boom, it roughly matches the P3 triangle!

Not sure why, but if I am not doing it wrong here, I think it just happens to work!

It will ‘work’ in such a simple case, but only by taking advantage of negative emission. I suspect the way objects of those colours would interact with a scene may be somewhat unpredictable. The goal is to be able to cover a larger gamut while staying within 0-1 reflectivity across the spectrum.

2 Likes

Right, I understand it doesn’t work as well as a potential “final solution” would. I am just happy that if AgX decides to use P3 as the working space, it will not cause significant problems for the spectral reconstruction.

So to summarize where we’re at as I understand it right now:

There are roughly four different but related topics covered in this thread so far:


1. Spectral Rendering via Hero Wavelengths

This is the base package, the minimum viable product.
Unless something changed, we are quite close to done with this; however, a few shaders still require some love. For instance, the hair shader still does not work correctly.

2. Spectral upsampling

This is important to make it possible to use regular image textures as color input for shaders, so it’s also gonna have to be in the base package.
We currently have a method that works adequately for sRGB color gamut. Anything beyond that will not work correctly, featuring wavelengths of negative amplitude. For an initial release this probably would be enough though.
That being said, there is a superior replacement, drastically expanding the gamut of possible input colors, already being worked on, but it’s currently buggy.

3. Filmic Update

This does not really affect the rendering process. It probably is not necessary for a first release, as the current Filmic would be somewhat fine for now. It only matters for final images: if you intend to do your own processing of EXRs in compositing, this may not come into play at all.
However, if no compositing is desired (or you want to apply the Filmic LUT to your result after compositing), implementing a new version of Filmic that can deal with extreme saturation would be a huge improvement for high saturation scenes that currently would throw away information by clipping instead of attempting to smoothly compress the image to the target range.

It seems to me we would actually need multiple versions of this for different target gamuts.
However, at first a version just for sRGB, replacing current Filmic, would be fine.

this turned out to be wrong. See:

Really, imo, it would be great if we had something like a LUT editor in Blender for proper lookdev work. There is a lot of artistry going into designing a LUT, and Filmic is, as I understand it, designed to be a solid neutral default, attempting to give “good” results no matter what. As such it’s important, as it will inevitably be the most commonly used LUT (much like current Filmic is the most common choice).

There is an early beta version of a wider gamut Filmic. I have not tested it myself yet, but early results look promising, if maybe a bit pale. (Somewhat inevitable, I suspect, as compressing saturation is the whole point, so I don’t want to overstate that without first doing my own tests.)
Can an OCIO config and such be hidden behind the Experimental Features flag? If so, it could be nice to bundle this preliminary version but warn that it is subject to change.

All that said, it’s not strictly about Spectral Rendering and could be seen as complementary. I don’t think a first release of Spectral Cycles would necessarily need this bundled.

4. UI for Wide Gamut colors or spectra

This is concerned with letting people actually pick colors outside sRGB using color pickers, or fiddle with spectra directly.
Some of that just amounts to exposing spectra as values and a lot of this has already been done (though I think the latest version stripped out some functionality for the sake of maintainability?)
But other parts are trickier. It would be good to be able to distinguish metamers for instance, and it would also be nice to have a few ways to pick high saturation colors beyond the gamut of sRGB in a decently intuitive way.
IMO most of this can be postponed until the core is in though. In fact it may well be easier to wait until basic spectral support exists and we get more people testing all this stuff and giving proposals accordingly.
Like, it’s fine to think about it already now, but it’s not unlikely that we aren’t even aware of all the use cases we ideally want to support until more people get a hand on this.

As far as I can tell, the only thing stopping this project from getting submitted as a patch for review right now is the remaining broken shaders in 1., right? All the other things should be possible to figure out later, in independent patches.

4 Likes

That’s pretty much correct.

I think it’s also necessary to have spectral reconstruction not limited to sRGB, so that means the new method must be finished and included in the base package.

Another question is how spectral rendering should be integrated into Cycles. Replacing RGB rendering means it must be thoroughly tested and be production-ready, while having a runtime switch between those two requires doubling the amount of bundled GPU kernels and considerably increasing the size of the installation. Adding spectral rendering as a build option can also be done, but that means most of the users won’t use it anyway.

2 Likes

I think having it parallel / as a build option for now is a good start though; at least getting a patch in for now, so there could be a first core Blender dev review.

5 Likes

I think we can have it as a run time switch for now. Size increase is expected I think, Blender has been increasing its size every release, so…

2 Likes

I don’t understand… Isn’t the different target space already specified by the Display Device setting in the CM panel? Currently we have three display devices: sRGB, Display P3, and BT.1886. Not sure what you mean by “multiple versions of this”.

I would actually think of it as a separate but related patch. I think which one goes first would depend on which one finishes first. My personal idea is that if we can have AgX in master first, and the spectral branch merges it from master, then we can have it no problem. Having it first can also avoid a mass audience having negative first reactions to the glaring spectral skews. But again, I think this depends on which one finishes first. It’s totally fine for Spectral to be merged to master first, since they probably will be separate patches anyway.

1 Like

Yeah, and Filmic as it stands, afaik, targets only sRGB. It’s not meant for gamuts larger than that. Current Filmic assumes colors encoded with sRGB primaries as source and sRGB as target.
The new Filmic will target the full spectral locus as source and some color space, likely sRGB, as target.
( @troy_s please correct me if I’m wrong. )
That said, I suspect going from the largest (spectral) to the smallest (sRGB) gamut is gonna pose the biggest challenge. If that works reasonably well, all larger spaces are gonna work as well if not better, as less compression has to happen.

I don’t think Spectral Filmic has much of a point without spectral colors. As it stands, with current sRGB Cycles as source and target, making Filmic work for spectral colors would simply crush saturation unnecessarily. It will absolutely be good to have, but it only really will have a point once spectral rendering is actually in.