Thoughts on making Cycles into a spectral renderer

I wonder what the subsurface radius is actually doing in the spectral branch. Is it being treated as a color and converted to spectra, or what? Would changing the scene_linear working space affect this process? Does it still make sense for it to be Non-Color and yet an RGB vector in spectral?


In the spectral setting, you have scattering spectra as well. The input would have to change from a vector to an unbounded spectrum.

That spectrum doesn’t really correspond to a color though. I mean, you can calculate what color that spectrum would correspond to, but that’s kinda just misinterpreting what that spectrum means.

In general, spectral rendering forces us to be a lot more specific with what various inputs mean. Emission, Absorption, Reflectance, and Scattering spectra are four distinct types with distinct meanings.
There is also the complex IOR, which is a complex-valued spectrum.
And then there is fluorescence which would require an entire function from spectra to spectra, presumably modelled as a large matrix.
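
To make the matrix idea concrete, here’s a minimal sketch (hypothetical bin layout and made-up numbers, nothing Cycles-specific) of a re-radiation matrix acting on a discretized spectrum:

```python
# Sketch: fluorescence as a re-radiation matrix applied to a discretized
# spectrum. The bin layout and the matrix entries are invented for
# illustration only.

N_BINS = 4  # e.g. coarse bins covering 400-700 nm

def apply_reradiation(matrix, incoming):
    """outgoing[j] = sum_i matrix[j][i] * incoming[i].

    matrix[j][i] is the fraction of energy absorbed in bin i that is
    re-emitted in bin j. Ordinary (non-fluorescent) reflectance is the
    special case where the matrix is diagonal.
    """
    return [sum(matrix[j][i] * incoming[i] for i in range(N_BINS))
            for j in range(N_BINS)]

# Toy matrix: bin 0 (shortest wavelengths) is partly down-converted into
# bin 2; the rest of the diagonal is plain reflectance.
M = [
    [0.2, 0.0, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.0],
    [0.7, 0.0, 0.8, 0.0],  # violet light re-emitted at longer wavelengths
    [0.0, 0.0, 0.0, 0.8],
]

violet_light = [1.0, 0.0, 0.0, 0.0]
print(apply_reradiation(M, violet_light))  # -> [0.2, 0.0, 0.7, 0.0]
```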

I’m guessing anisotropy could also be made spectrally dependent. Not sure how commonly relevant that is though.


In theory - could ReSTIR/SVGF be implemented with Spectral Rendering? Radeon ProRender implements these with a ludicrous speed boost :face_with_spiral_eyes:

Are these a compromise in image quality? Cause these + Persistent Data would totally make Cycles the world’s fastest Spectral Pathtracer.


Probably.


Just for reference, rendering with ReSTIR and SVGF is almost always slower than rendering without it. The main benefit is the noise is reduced compared to equal time renders without these, or similar features.


As far as I can tell, ReSTIR is a technique to help in picking better paths for a ray to follow to reduce noise. It relies on temporal (information over time) and spatial information (information over pixels/space) to do this.

Due to the temporal nature of it, if anything changes in the scene, then the “better ray usage” in certain parts of the scene will become less than ideal (but still typically better than not using ReSTIR).

If I recall correctly, ReSTIR also comes in two modes: biased and unbiased. The biased mode is not physically correct, but typically has less noise than the unbiased mode. If Cycles were to implement this feature, it would likely be the unbiased one.
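
For what it’s worth, the resampling core of ReSTIR is weighted reservoir sampling; here’s a toy sketch of that one step (illustrative names and weights, not any renderer’s actual code):

```python
import random

class Reservoir:
    """Minimal weighted-reservoir sketch of the resampling step at the
    heart of ReSTIR (illustrative only; not any renderer's actual code)."""

    def __init__(self):
        self.sample = None
        self.w_sum = 0.0
        self.count = 0

    def update(self, candidate, weight):
        # Keep the new candidate with probability weight / w_sum; the
        # surviving sample ends up distributed according to the weights,
        # using O(1) memory no matter how many candidates stream through.
        self.w_sum += weight
        self.count += 1
        if random.random() < weight / self.w_sum:
            self.sample = candidate

random.seed(0)  # deterministic for the demo
res = Reservoir()
# Candidates could be light samples; weight ~ target_pdf / source_pdf.
for light, weight in [("lamp", 0.1), ("sun", 5.0), ("sky", 1.0)]:
    res.update(light, weight)
print(res.sample, res.count)
```

In full ReSTIR, reservoirs from neighboring pixels and previous frames get merged the same way, which is where the spatial and temporal reuse comes from.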


SVGF is just a denoiser. All denoisers have some sort of compromise. Whether or not it’s better than OIDN/OptiX denoising comes down to the use case and scene.


I mean, that’s how all of these sorts of algorithms go. Generally speaking, you either do more work to pick smarter samples, or you cheat a bit and introduce bias. Either way it might be slower per sample (especially for the first variant, which is free of “cheats”), but the image is gonna be usable much sooner.


@Skye_Fang
so “RGB spectrum” kinda doesn’t make sense. Cycles is not (yet) a spectral engine (though some people are working on this as you can see in the history of this thread) but RGB colorspaces are independent of spectral rendering. In particular, the most common sRGB colorspace doesn’t even have spectral primaries, so “pure” red, green and blue in that colorspace does not correspond to any sort of wavelength at all!
And there are, in fact, infinitely many different spectra that will produce the same color impression as any of these primaries.
Since the sRGB primaries aren’t pure, at minimum you’d need two different wavelengths per primary, though more likely the emission spectrum would be quite a bit broader than that.
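
Just to illustrate the metamer point, here’s a toy example with made-up 4-bin “matching functions” (the real CIE 1931 curves are tabulated over many wavelengths, but the principle is the same):

```python
# Toy illustration of metamerism: two different spectra that integrate to
# the same tristimulus values under a (made-up) set of matching functions.
# Real CIE matching functions are smooth tabulated curves; these 4-bin
# rows are stand-ins just to show the principle.

MATCH = [  # rows ~ "x-bar, y-bar, z-bar" sensitivities per bin
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
]

def to_tristimulus(spectrum):
    """Integrate a binned spectrum against each matching function."""
    return tuple(sum(row[i] * spectrum[i] for i in range(4)) for row in MATCH)

s1 = [0.5, 0.5, 0.5, 0.5]
s2 = [0.5, 0.5, 0.8, 0.2]  # different energy distribution in the last bins

print(to_tristimulus(s1))  # -> (0.5, 0.5, 1.0)
print(to_tristimulus(s2))  # -> (0.5, 0.5, 1.0): same color, different spectrum
```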

Instead, in regular Cycles, as you said, there is just one single ray which gets evaluated across three channels. For non-spectral rendering, that’s enough and, indeed, preferable, as that way you can completely avoid any and all color noise.
Once spectral rendering is in, you’ll presumably just have an attribute “wavelength” in nanometers or micrometers, and can use that for your lenses.
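
For example, a wavelength attribute would let you drive the IOR of a lens with something like Cauchy’s empirical equation; the coefficients below are the commonly quoted ones for BK7 glass:

```python
# Sketch of how a per-ray "wavelength" attribute could drive dispersion:
# Cauchy's empirical equation n(lambda) = A + B / lambda^2, with the
# commonly quoted two-term coefficients for BK7 glass.

def cauchy_ior(wavelength_um, A=1.5046, B=0.00420):
    """Index of refraction as a function of wavelength in micrometers."""
    return A + B / wavelength_um ** 2

for wl in (0.45, 0.55, 0.65):  # blue, green, red (in micrometers)
    print(f"{wl * 1000:.0f} nm -> IOR {cauchy_ior(wl):.4f}")
```

Shorter wavelengths get a higher IOR and so refract more strongly, which is exactly what produces dispersion in a spectral renderer.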


I’ve been thinking about RGB-to-spectral conversions some more, and I think I’ve figured out a way to, oddly, have a metamerism-free model of color that’s nonetheless spectral.
It’s pretty strange though, and I have no idea what it’d end up looking like. Probably actually pretty cartoonish.

It’s a fluorescence-based model, but with a very very simple rule:
Any wavelength gets converted to the full target spectrum (scaled in such a way as to preserve overall energy, so very low energy colors need very high intensities to appreciably get converted, whereas high energy colors will make the corresponding color glow appreciably)

No matter what light you’d throw at a fluorescent material like that, it’d “reflect” back the exact same color, just scaled in terms of overall brightness.

I’m guessing using purely that would actually turn out to suck for most situations. But perhaps it’d be possible to combine this fluorescence based method with something purely reflectance based to allow for some fun unique effects.
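
The rule can be sketched in a few lines (the bins and the target spectrum here are made up for illustration):

```python
# Sketch of the rule above: any incoming light is re-emitted as one fixed
# target spectrum, scaled so total energy is preserved. The bin layout
# and numbers are invented for illustration.

TARGET = [0.0, 0.1, 0.6, 0.3]  # the one "color" the material always glows in

def reemit(incoming_energy):
    """Re-emit the full target spectrum, scaled to conserve energy."""
    scale = incoming_energy / sum(TARGET)
    return [t * scale for t in TARGET]

# Equal-energy violet and red photons come out as the exact same spectrum,
# so the material's apparent color never depends on the illuminant:
print(reemit(1.0) == reemit(1.0))  # -> True
```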

Note, none of the old spectral builds were able to handle fluorescence, and I’m not sure what would have to change in the sampling to make that work well. (Normally, with Hero Wavelengths, you’d just stick to a single set of wavelengths throughout one sample, but with fluorescence you kinda have to account for the possibility of a shifting color too, and I have no idea how to do that efficiently.) So I can’t actually attempt something like that as of right now.

It’s a material property, right? Fluorescent materials have an exponential-decay lifetime that determines how long and how strongly they radiate.

Timing doesn’t matter at all there. We can assume a very fast decay time, meaning essentially instant re-emission, effectively making it a regular reflection.
If we want fully realistic phosphorescence, that adds extra trickiness. At that point you want an actual physical simulation that tracks how much exposure the material gets and how it emits over time and what not. Realistically, in most artistic applications you’d probably just wing it rather than run some sort of physics simulation that has to consider render results, which in turn have to take the physics simulation into account. Like, that sounds like a horrid thing to sample accurately lol

The issue is different though: I’m just assuming 100% of the light that hits the surface gets reemitted instantly (but preserves total light energy) in any arbitrary spectrum.
So there might be a violet light source hitting the surface, and it glows in some sort of red.

But how would you actually keep track of that? - Hero wavelengths sample one set of wavelengths at a time. But if you are supposed to sample red light right now, the violet light source wouldn’t even get noticed. And if you are supposed to sample violet light, while the violet light gets noticed, no violet reflection occurs, so it couldn’t be tracked further.

I’m sure there’s some sort of trick to do it anyways: There are some renderers that can actually handle this. But I have no idea what it is.

EDIT:

Apparently, according to these slides, https://graphics.cg.uni-saarland.de/courses/ris-2021/slides/Spectral%20Raytracing.pdf, the trick is to fall back to sampling individual wavelengths with an appropriate switch, making this kind of material way noisier, as it doesn’t benefit from Hero Wavelength Sampling…
I wonder if this can be fixed
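
The fallback in those slides amounts to something like this (all names here are hypothetical):

```python
# Sketch of the fallback: trace a hero wavelength plus secondary
# wavelengths, but collapse to the single hero wavelength as soon as a
# fluorescent event occurs. Surface tags and names are hypothetical.

def trace(wavelengths, surfaces):
    """wavelengths[0] is the hero; the rest ride along for efficiency."""
    active = list(wavelengths)
    for surface in surfaces:
        if surface == "fluorescent":
            # The secondary wavelengths can no longer share this path:
            # drop them and continue with the hero wavelength alone.
            active = active[:1]
        # ... evaluate the BSDF at each active wavelength here ...
    return active

hero_wavelengths = [550.0, 475.0, 625.0, 700.0]
print(trace(hero_wavelengths, ["diffuse", "fluorescent", "glossy"]))  # -> [550.0]
print(trace(hero_wavelengths, ["diffuse", "glossy"]))  # all four survive
```

Every collapse means the remaining path is sampled at one wavelength instead of four, which is exactly where the extra color noise comes from.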

This seems to be a material/shader property as well. The shader programming is where you tell the renderer how it should behave when light gets emitted, etc.

You mean the Hero Wavelength transport is already calculated as RGB, so you lose the wavelength possibilities?

No, the Hero Wavelength is just a set of like 8 fixed wavelengths per sample step that get calculated simultaneously, basically.
But the issue is, that only works if the re-emitted wavelength happens to be the same as the input wavelength. Or perhaps it could be bent so that the re-emitted wavelength happens to be another of the wavelengths in the set.
Neither of which is sufficient to render out arbitrary fluorescence, as the wavelength would actually have to change over the course of a path.

Converting to RGB only happens at the very end, when saving a sample in an image, so it doesn’t matter at all here.
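
For illustration, such a set is usually built by taking one sampled hero wavelength and rotating it in equal steps across the visible range (as in Wilkie et al.’s hero wavelength spectral sampling paper); a sketch:

```python
# Sketch of hero-wavelength set construction: one sampled hero wavelength
# plus equally spaced "rotations" of it across the visible range, so the
# set always covers the spectrum evenly.

LAMBDA_MIN, LAMBDA_MAX = 400.0, 700.0  # visible range in nanometers

def hero_set(hero, count=4):
    span = LAMBDA_MAX - LAMBDA_MIN
    return [LAMBDA_MIN + (hero - LAMBDA_MIN + j * span / count) % span
            for j in range(count)]

print(hero_set(550.0))  # -> [550.0, 625.0, 400.0, 475.0]
```

All wavelengths in the set share one path and one set of BSDF evaluations, which is exactly why the path geometry has to be valid for every wavelength at once.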

Not sure what you mean by re-emitted. You have an emitter source that irradiates the material, then the material changes the emission color to whatever you programmed in your shader.

Fluorescence is re-emission:

  • A goes in
  • A gets absorbed by the material
  • electrons inside get bumped to some higher level
  • electrons fall to some lower level
  • B goes out

This is also why it takes time. But the time for those changes can be extremely short, making it effectively like regular reflection but with a complete change to the spectrum after reflection.

And the issue is, that Hero Wavelengths assume:

  • A goes in
  • A goes out

or

  • B goes in
  • B goes out

but can’t handle

  • A goes in
  • B goes out

The wavelength must stay constant!
A shader that demands the wavelengths change simply would not be supported at all by the usual Hero Wavelength implementation.

Why? I mean, everything you have described is just a basic shader property, like absorption, no?

Listen. If your belief is that the shader is all that matters, then explain to me why getting real caustics has been such a difficult task even though regular reflection or refraction shaders would already very much imply the existence of caustics.

Shaders are not everything. The sampling techniques must actually explore the paths implied by those shaders efficiently. And regular Hero Wavelengths do not sample paths where the wavelength changes at all, let alone efficiently. It simply is not included as an option to sample in the technique.

This sounds odd. What happens with all the other shaders that reflect, absorb, scatter, emit, etc.? Or, as an example, all the properties of the Principled shader?

Precisely zero of those require changing wavelength across a single path of bouncing light?

Red photon goes in, red photon bounces several times, red photon hits the camera. At no point at all does it magically turn green. It just gets redirected and the intensity might get reduced (for physical materials)

What happens with light that gets absorbed and scattered in a volume? You can program a simple phase shift into your shader. If I understand you right, you mean changing the emitted source wavelength? I think that does not happen even in regular Cycles.
If that is the case, I don’t know.

Volume bounces are no different at all from surface bounces, except that they can happen anywhere within a volume, rather than definitely happening on a fixed surface.
Wavelengths generally stay constant across a path.
Unless you are looking at a Fluorescent path.
Note that a phase shift doesn’t change an individual ray’s color. It may result in interference effects, but those are completely unrelated to fluorescence. Those happen between different rays, rather than within a single ray. And such phase effects happen with surface bounces too; in particular, they are a big reason why metals look shiny.


You may have seen these papers already; they might be helpful.