Thoughts on making Cycles into a spectral renderer

The reason why hero wavelength sampling uses uniform spacing is to be able to compute the correct pdf, because it factors out. CMIS proved that even if each of your per-wavelength pdfs is biased (compared with the actual marginal pdf we need), after you collect many, many samples the result eventually converges to the correct pdf.
The conclusion might sound intuitive, but it’s nontrivial to prove.
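
To illustrate what “uniform spacing” and “factoring out” mean in practice, here is a minimal sketch of hero wavelength sampling (in the style of Wilkie et al. 2014). The names and the wavelength count are illustrative, not actual Cycles code:

```cpp
#include <array>
#include <cmath>

// Hero wavelength sampling: pick one hero wavelength uniformly, then fill
// the remaining channels with equally spaced rotations of it.
constexpr float LAMBDA_MIN = 380.0f;
constexpr float LAMBDA_MAX = 730.0f;
constexpr int NUM_WAVELENGTHS = 4;  // count is illustrative

struct WavelengthSample {
  std::array<float, NUM_WAVELENGTHS> lambda;
  float pdf;  // identical for every wavelength, which is the point
};

WavelengthSample sample_wavelengths(float u /* uniform in [0,1) */) {
  const float range = LAMBDA_MAX - LAMBDA_MIN;
  WavelengthSample s;
  const float hero = u * range;  // hero offset from LAMBDA_MIN
  // A rotation is a measure-preserving shuffle of the interval, so every
  // lambda[j] is itself uniformly distributed; the joint pdf therefore
  // factors into the same constant for each channel.
  for (int j = 0; j < NUM_WAVELENGTHS; ++j) {
    const float shifted = hero + (float(j) / NUM_WAVELENGTHS) * range;
    s.lambda[j] = LAMBDA_MIN + std::fmod(shifted, range);
  }
  s.pdf = 1.0f / range;
  return s;
}
```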

The TODO in the commit message simply means that path guiding requires a float pdf type, but now I have a float4, so I need to figure out a way to approximate the float4 with a float in the many, many places where path guiding is used.
I am indeed thinking of using path guiding to pick wavelengths; that’s effectively combined importance sampling. I heard there is already work on it, but I haven’t had the time to check it yet.
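
For what it’s worth, the obvious candidates for that approximation look something like this (purely a sketch of the options, not a decision; the float4 type here is a stand-in):

```cpp
struct float4 { float x, y, z, w; };  // stand-in for the real vector type

// Option 1: hand path guiding the hero channel's pdf and ignore the rest.
float guiding_pdf_hero(const float4 &pdf) {
  return pdf.x;
}

// Option 2: average the four per-wavelength pdfs into one scalar.
float guiding_pdf_avg(const float4 &pdf) {
  return 0.25f * (pdf.x + pdf.y + pdf.z + pdf.w);
}
```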

5 Likes


I wonder whether using the PDF of the hero (the first wavelength of the 8) would be sufficient, considering most materials aren’t going to have properties like dispersion, where the path distribution depends on wavelength.

1 Like

This video is an entire conference, several talks’ worth of various problems and solutions that come up in spectral contexts. Some of them are concerned with rendering, others with live-action workflows.
Speakers include folks from Nvidia, Netflix, Weta, and Unity.

One particularly interesting one involves spectrally modelling skin, and I think it’d be great to have that spectral skin shader in Spectral Cycles eventually, to give easy access to natural results.

3 Likes

Absolutely agree. This is the sort of thing I was thinking of when I mentioned that the surface area for exciting improvements expands immensely once the initial implementation is merged.

1 Like

Just watched the first lecture. Interestingly, they called the RGB → Spectra process “uplifting”. I have heard several terms for this already: “upsampling”, “uplifting”, “reconstruction”, “recovery”. Is there a standard term to use? It’s a bit confusing.

This also reminds me: I hope we can have an RGB → Spectra conversion in spectral Cycles that doesn’t hardcode any specific RGB colorspace. The one in the previous spectral branch was hardcoded to Rec.709 primaries, and the attempt to switch to something better seemed to be stopped by performance issues (EDIT: just went back and checked our previous conversations here, and it seems my memory was wrong: it wasn’t “performance issues” but rather “bugs”).

1 Like

I think using the method by Jakob and Hanika (2019) is the current plan? Which, I believe, is built on top of XYZ and therefore at least agnostic to primaries (though, necessarily, not to the white point, I think. That would be impossible. At best you can try to fit a color to multiple white points, attempting to minimize the difference across several viewing conditions, rather than making it exact for any single one).
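
For reference, the per-wavelength evaluation in that method is very cheap: a reflectance is represented by just three fitted coefficients (looked up from a precomputed per-colorspace table), pushed through a quadratic and a sigmoid. A sketch of the evaluation step only:

```cpp
#include <cmath>

// Spectrum evaluation from Jakob & Hanika (2019), "A Low-Dimensional
// Function Space for Efficient Spectral Upsampling". The coefficients
// c0..c2 come from a precomputed fit, which is not shown here.
float sigmoid(float x) {
  return 0.5f + 0.5f * x / std::sqrt(1.0f + x * x);
}

// Reflectance in [0,1] at wavelength lambda (in nm).
float eval_reflectance(float c0, float c1, float c2, float lambda) {
  const float x = (c0 * lambda + c1) * lambda + c2;  // Horner form of the quadratic
  return sigmoid(x);
}
```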

In colour-science they call it “Spectral Recovery and Up-sampling”.
The idea behind all the various terms in use is pretty much the same, though. I wish there were a single consistent term, but it’s not like they mean wildly different things on the face of it.

2 Likes

I am 70% sure that basic inversion can be achieved, including a customized weighted locus as required, via inversion of MacAdam moments.
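
In case this refers to moment-based spectral representations (where a bounded reflectance is encoded by a handful of trigonometric moments and reconstructed from them): the forward step is just a Fourier-style integral, sketched below. The reconstruction, i.e. the actual inversion via a bounded maximum-entropy solution, is the involved part and is omitted; this is only my guess at what’s meant:

```cpp
#include <complex>
#include <vector>

// Trigonometric moments of a reflectance signal sampled at phases in
// [-pi, pi]. How wavelengths get warped to phases (and whether a custom
// weighted locus enters there) is left open, matching the uncertainty above.
std::vector<std::complex<float>> trigonometric_moments(
    const std::vector<float> &reflectance,  // samples of r in [0,1]
    const std::vector<float> &phase,        // matching phases in [-pi, pi]
    int num_moments) {
  const float kTwoPi = 6.2831853f;
  std::vector<std::complex<float>> gamma(num_moments + 1);
  for (int j = 0; j <= num_moments; ++j) {
    std::complex<float> sum(0.0f, 0.0f);
    // Plain Riemann sum; fine for a sketch.
    for (size_t i = 0; i + 1 < phase.size(); ++i) {
      const float d_phi = phase[i + 1] - phase[i];
      sum += reflectance[i] *
             std::exp(std::complex<float>(0.0f, -float(j) * phase[i])) *
             (d_phi / kTwoPi);
    }
    gamma[j] = sum;
  }
  return gamma;
}
```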

1 Like

No idea what exactly you are referring to, but there are two very important limiting factors for spectral rendering, since there are myriads of samples and corresponding color lookups at all times:

  • the method must be really fast
  • the method must be very light on memory

If inverting MacAdam moments is either slow or memory-intensive, it’s going to be a no-go.
Otherwise, I’m sure that’s a good direction to explore.
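
For a sense of scale on the memory side: the precomputed-table approaches stay small. Assuming, purely hypothetically, a 64³ RGB grid storing three float coefficients per cell:

```cpp
#include <cstddef>

// Back-of-envelope footprint of a hypothetical RGB -> coefficient table.
constexpr std::size_t RES = 64;    // grid resolution per RGB axis (assumed)
constexpr std::size_t COEFFS = 3;  // float coefficients per cell
constexpr std::size_t table_bytes = RES * RES * RES * COEFFS * sizeof(float);
// 64 * 64 * 64 * 3 * 4 bytes = 3,145,728 bytes, i.e. about 3 MiB per
// colorspace, with a single trilinear fetch per lookup.
```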

1 Like

I wonder what the subsurface radius is actually doing in the spectral branch. Is it being treated as a color and converted to spectra, or what? Would changing the scene_linear working space affect this process? Does it still make sense for it to be a Non-Color, yet RGB, vector in spectral?

2 Likes

In the spectral setting, you have scattering spectra as well. The input would have to change from a vector to an unbounded spectrum.

That spectrum doesn’t really correspond to a color though. I mean, you can calculate what color that spectrum would correspond to, but that’s kinda just misinterpreting what that spectrum means.

In general, spectral rendering forces us to be a lot more specific with what various inputs mean. Emission, Absorption, Reflectance, and Scattering spectra are four distinct types with distinct meanings.
There is also Complex IOR, which is a complex-valued spectrum.
And then there is fluorescence which would require an entire function from spectra to spectra, presumably modelled as a large matrix.
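
To make that concrete, here is one plausible way to discretize those types; fluorescence then becomes an N × N re-radiation matrix (sometimes called a Donaldson matrix), where entry (out, in) says how much light arriving in bin `in` is re-emitted in bin `out`. Bin count and names are made up for illustration:

```cpp
#include <array>

constexpr int N = 32;  // number of wavelength bins, arbitrary here

using Spectrum = std::array<float, N>;                          // one value per bin
using ReradiationMatrix = std::array<std::array<float, N>, N>;  // [out][in]

// Fluorescent response is a plain matrix-vector product over the bins.
Spectrum apply_fluorescence(const ReradiationMatrix &M, const Spectrum &in) {
  Spectrum out{};
  for (int o = 0; o < N; ++o)
    for (int i = 0; i < N; ++i)
      out[o] += M[o][i] * in[i];
  return out;
}
```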

I’m guessing anisotropy could also be made spectrally dependent. Not sure how commonly relevant that is though.

4 Likes

In theory, could ReSTIR/SVGF be implemented with spectral rendering? Radeon ProRender implements these with a ludicrous speed boost :face_with_spiral_eyes:

Are these a compromise in image quality? Because these + Persistent Data would totally make Cycles the world’s fastest spectral pathtracer.

4 Likes

Probably.


Just for reference, rendering with ReSTIR and SVGF is almost always slower than rendering without them. The main benefit is that noise is reduced compared to equal-time renders without these or similar features.


As far as I can tell, ReSTIR is a technique to help in picking better paths for a ray to follow to reduce noise. It relies on temporal (information over time) and spatial information (information over pixels/space) to do this.

Due to the temporal nature of it, if anything changes in the scene, then the “better ray usage” in certain parts of the scene will become less than ideal (but still typically better than not using ReSTIR).

If I recall correctly, ReSTIR also comes in two modes, biased and unbiased. The biased mode is not physically correct, but typically has reduced noise compared to the unbiased mode. If Cycles were to implement this feature, it would likely be the unbiased one.
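
For intuition, the primitive that ReSTIR builds on is weighted reservoir sampling: candidates stream through, and one survivor is kept with probability proportional to its weight; spatial and temporal reuse then merge reservoirs across pixels and frames. A minimal sketch (not Cycles or ProRender code):

```cpp
#include <random>

struct Reservoir {
  int sample = -1;          // index of the surviving candidate
  float weight_sum = 0.0f;  // total weight seen so far
  int count = 0;            // number of candidates seen

  // Keep `candidate` with probability weight / weight_sum, which leaves
  // every candidate with survival probability proportional to its weight.
  void update(int candidate, float weight, std::mt19937 &rng) {
    weight_sum += weight;
    ++count;
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    if (u(rng) < weight / weight_sum)
      sample = candidate;
  }
};
```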


SVGF is just a denoiser. All denoisers have some sort of compromise. Whether or not it’s better than OIDN/OptiX denoising comes down to the use case and scene.

3 Likes

I mean, that’s how all of these sorts of algorithms go. Generally speaking, you either do more work to figure out smarter samples, or you cheat a bit and introduce bias. That might be slower per sample (especially for the first variant, the one free of “cheats”), but the image is going to be usable much sooner.

1 Like

@Skye_Fang
so “RGB spectrum” kinda doesn’t make sense. Cycles is not (yet) a spectral engine (though some people are working on this as you can see in the history of this thread) but RGB colorspaces are independent of spectral rendering. In particular, the most common sRGB colorspace doesn’t even have spectral primaries, so “pure” red, green and blue in that colorspace does not correspond to any sort of wavelength at all!
And there are, in fact, infinitely many different spectra that will produce the same color impression as any of these primaries.
Since the sRGB primaries aren’t pure, the minimum necessary would be two different wavelengths per primary, though more likely it’d be quite a bit broader an emission spectrum than that.

Instead, in regular Cycles, as you said, there is just one single ray which gets evaluated across three channels. For non-spectral rendering, that’s enough and, indeed, preferable, as that way you can completely avoid any and all color noise.
Once spectral rendering is in, you’ll presumably just have an attribute “wavelength” in nanometers or micrometers, and can use that for your lenses.
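
As an example of what such an attribute would enable: a dispersive lens material could compute its IOR per ray from the wavelength, e.g. with Cauchy’s equation n(λ) = A + B/λ². The coefficients below are roughly those quoted for BK7 glass, just for illustration:

```cpp
// Cauchy's equation for a dispersive IOR; lambda_um is in micrometers.
float dispersive_ior(float lambda_um) {
  const float A = 1.5046f;   // approximate BK7 coefficients
  const float B = 0.00420f;  // in um^2
  return A + B / (lambda_um * lambda_um);
}
// e.g. dispersive_ior(0.486f) ~= 1.522, dispersive_ior(0.656f) ~= 1.514
```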

3 Likes

I’ve been thinking about RGB-to-spectral conversions some more, and I think I kinda figured out a way to, oddly, have a metamerism-free model of color that’s nonetheless spectral.
It’s pretty strange though, and I have no idea what it’d end up looking like. Probably actually pretty cartoonish.

It’s a fluorescence-based model, but with a very, very simple rule:
Any wavelength gets converted to the full target spectrum (scaled in such a way as to preserve overall energy, so very low-energy colors need very high intensities to appreciably get converted, whereas high-energy colors will make the corresponding color glow appreciably).

No matter what light you’d throw at a fluorescent material like that, it’d “reflect” back the exact same color, just scaled in terms of overall brightness.

I’m guessing using purely that would actually turn out to suck for most situations. But perhaps it’d be possible to combine this fluorescence based method with something purely reflectance based to allow for some fun unique effects.
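
In discretized form, the rule above is just “integrate the incoming energy, then redistribute all of it according to one fixed, normalized target spectrum”. A sketch of that, with made-up names:

```cpp
#include <array>

constexpr int N = 32;  // wavelength bins, arbitrary
using Spectrum = std::array<float, N>;

// `target` is assumed normalized so that sum(target) * bin_width == 1.
// Whatever spectrum comes in, the same target shape comes out, scaled so
// total energy is preserved -- hence no metamerism.
Spectrum reemit(const Spectrum &incident, const Spectrum &target, float bin_width) {
  float energy = 0.0f;
  for (float v : incident)
    energy += v * bin_width;  // total incoming energy over all bins
  Spectrum out;
  for (int i = 0; i < N; ++i)
    out[i] = energy * target[i];  // redistribute according to the target
  return out;
}
```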

Note, none of the old spectral builds were able to handle fluorescence, and I’m not sure what’d have to change with the sampling to make that work well (normally in Hero wavelengths, you’d just stick to a single color throughout one sample, but with fluorescence you kinda have to account for the possibility of a shifting color too and I have no idea how to do that efficiently), so I can’t actually attempt something like that as of right now.

It’s a material property, right? Fluorescent materials have an exponential-decay lifetime that determines how long and how strongly they radiate.

Timing doesn’t matter at all there. We can assume a very fast decay time, meaning essentially instant re-emission, effectively making it a regular reflection.
If we want fully realistic phosphorescence, that adds extra trickiness. At that point you want an actual physical simulation that times out how much exposure the material gets, how it emits over time, and what not. Realistically, in most artistic applications, you’d probably just wing it rather than applying some sort of physics simulation that has to actually consider render results, which in turn have to take into account the physics simulation. Like, that sounds like a horrid thing to sample accurately lol

The issue is different though: I’m just assuming 100% of the light that hits the surface gets re-emitted instantly (preserving total light energy) in an arbitrary spectrum.
So there might be a violet light source hitting the surface, and it glows in some sort of red.

But how would you actually keep track of that? - Hero wavelengths sample one set of wavelengths at a time. But if you are supposed to sample red light right now, the violet light source wouldn’t even get noticed. And if you are supposed to sample violet light, while the violet light gets noticed, no violet reflection occurs, so it couldn’t be tracked further.

I’m sure there’s some sort of trick to do it anyways: There are some renderers that can actually handle this. But I have no idea what it is.

EDIT:

Apparently, according to this, https://graphics.cg.uni-saarland.de/courses/ris-2021/slides/Spectral%20Raytracing.pdf the trick is to fall back to sampling a single individual wavelength with an appropriate switch, making this kind of material way noisier, as it doesn’t benefit from Hero Wavelength Sampling…
I wonder if this can be fixed.
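
If I understand the fallback right, it amounts to something like this: on the first fluorescent hit, stochastically keep one of the hero wavelengths (weighting its throughput by N to stay unbiased), kill the rest, and let the survivor shift freely afterwards. Entirely hypothetical names, just to pin down the idea:

```cpp
#include <array>
#include <random>
#include <utility>

constexpr int NUM_WAVELENGTHS = 4;  // count is illustrative

struct PathState {
  std::array<float, NUM_WAVELENGTHS> lambda;
  std::array<float, NUM_WAVELENGTHS> throughput;
};

// Collapse the hero set to a single wavelength. Keeping channel j with
// probability 1/N and multiplying its throughput by N keeps each channel's
// contribution unbiased in expectation; the price is extra variance.
void collapse_for_fluorescence(PathState &st, std::mt19937 &rng) {
  std::uniform_int_distribution<int> pick(0, NUM_WAVELENGTHS - 1);
  const int keep = pick(rng);
  for (int j = 0; j < NUM_WAVELENGTHS; ++j)
    if (j != keep)
      st.throughput[j] = 0.0f;  // the other wavelengths die here
  st.throughput[keep] *= float(NUM_WAVELENGTHS);
  std::swap(st.lambda[0], st.lambda[keep]);  // survivor becomes the hero
  std::swap(st.throughput[0], st.throughput[keep]);
  // st.lambda[0] may now be re-sampled according to the re-radiation behavior.
}
```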

This seems to be a material/shader property as well. The shader programming is where you tell the renderer how it should behave when light gets re-emitted, etc.

You mean the hero wavelength transport is already calculated as RGB, so that you lose the wavelength possibilities?

No, the Hero Wavelength is just a set of like 8 fixed wavelengths per sample step that get calculated simultaneously, basically.
But the issue is that this only works if the re-emitted wavelength happens to be the same as the input wavelength. Or perhaps it could be bent so that the re-emitted wavelength happens to be another of the wavelengths in the set.
Neither of which is sufficient to render out arbitrary fluorescence, as the wavelength would actually have to change over the course of a path.

Converting to RGB only happens at the very end, when saving a sample in an image, so it doesn’t matter at all here.