Thoughts on making Cycles into a spectral renderer

This CUDA error should be fixed now.

That’s right, the addon is not in the repository. My script that packs the builds copies it in separately. You can grab it from my Windows builds; it’s located in release\scripts\addons\import_spectrum_csv.py.

I believe node groups should be used in this case, since this setup can be created with existing nodes. With the upcoming Everything Nodes project it would be possible to ship useful node groups like this with the builds.


Alright, thanks! I’ll do the same for the next macOS builds.

I’d kinda consider it equivalent to the Color Mix node, but I suppose it’s easy enough to have as a fixed node group once that’s supported.

@brecht I’m looking into importance sampling the wavelength and it’s almost working perfectly. I’ve spent a while trying to figure out what’s wrong, but I’m not set up very well for debugging so I can’t step through it. It seems like the very last value in the CDF gets a huge weighting, and I’m not entirely sure why. Otherwise it seems to be behaving. Can you spot anything off here?

int wavelength_cdf_resolution = 1024;
vector<float> wavelength_importance_cdf;

/* Build a CDF over the wavelength range, weighted by the combined XYZ
 * response at each wavelength. */
util_cdf_evaluate(
    wavelength_cdf_resolution,
    MIN_WAVELENGTH,
    MAX_WAVELENGTH,
    [&kg](float x) {
      float3 xyz = wavelength_to_xyz(kg, x);
      return xyz.x + xyz.y + xyz.z;
    },
    wavelength_importance_cdf);

And here is the usage. I couldn’t figure out how to utilise lookup_table_read in this case.


float initial_offset = path_state_rng_1D(kg, state, PRNG_WAVELENGTH);
FOR_EACH_CHANNEL(i)
{
  /* Stratify the channels over [0, 1) with a single random offset. */
  float float_i = (float)i;
  float current_channel_offset = fmod(
    initial_offset + (float_i / CHANNELS_PER_RAY),
    1.0f
  );

  /* Look the offset up in the table and interpolate between the two
   * neighbouring entries. */
  float position_in_cdf = current_channel_offset * wavelength_cdf_resolution;
  int cdf_index = int(position_in_cdf);
  float bias = fmod(position_in_cdf, 1.0f);
  float low_wavelength = wavelength_importance_cdf[cdf_index];
  float high_wavelength = wavelength_importance_cdf[cdf_index + 1];
  float biased_progress = lerp(low_wavelength, high_wavelength, bias);
  state->wavelengths[i] = lerp(MIN_WAVELENGTH, MAX_WAVELENGTH, biased_progress);
}

New build on GraphicAll! Changes:

  • Fixed OCIO crashes
  • Changed the color of spectral sockets in the UI
  • Added a “Gaussian Spectrum” node; it’s a spectral version of the Wavelength node
  • Added a new “Camera Response Functions” panel to the Render Properties. It allows editing the CRF using a curves UI similar to the Spectrum Curves node. It’s also possible to load CRF presets from the scripts/presets/cycles/camera_response_function directory. There’s one preset included as an example.

This is the same as filter importance sampling, so I think it can use the lookup table mechanism and lookup_table_read rather than duplicating that code.

For importance sampling you need the inverted cdf, from util_cdf_inverted. And like lookup_table_read you may need to use wavelength_cdf_resolution - 1 depending on what that value is.
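
For example, the read side could look something like this (untested sketch; it assumes the inverted table stores normalized positions in [0, 1] that still get remapped to the wavelength range, like your code above does):

/* Untested sketch: sample from an inverted CDF table with `resolution`
 * entries, whose values are normalized positions in [0, 1]. */
float sample_wavelength(const vector<float> &inv_cdf, int resolution, float u)
{
  /* Scale by resolution - 1 so u = 1 maps to the last entry rather than
   * reading one past the end of the table. */
  float x = u * (resolution - 1);
  int index = min((int)x, resolution - 2);
  float t = x - (float)index;

  /* Interpolate between neighbouring entries, then remap to a wavelength. */
  float p = lerp(inv_cdf[index], inv_cdf[index + 1], t);
  return lerp(MIN_WAVELENGTH, MAX_WAVELENGTH, p);
}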


Thanks for that. How do I put this CDF into what seems like a global __lookup_table variable? And how would I find the offset?


Check how filter importance sampling does it in film.cpp.
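
Roughly, the pattern there is as follows (from memory, so double-check film.cpp for the exact calls and member names; the wavelength_* names here are just placeholders):

/* Host side (device_update): build the table, upload it through the scene
 * lookup tables, and store the returned offset in the kernel data. */
vector<float> wavelength_table;
/* ... fill wavelength_table with the inverted CDF ... */

size_t wavelength_table_offset = scene->lookup_tables->add_table(dscene, wavelength_table);
kfilm->wavelength_table_offset = (int)wavelength_table_offset;

/* Kernel side: lookup_table_read does the interpolated read for you. */
float p = lookup_table_read(
    kg, u, kernel_data.film.wavelength_table_offset, wavelength_cdf_resolution);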


Will do, thanks for the direction :+1:


I love that Camera Response idea, though I suspect it’s not gonna work well with an eventual Filmic preset once that’s back in?
Really love the crazy result though. It could be used for look dev, and I’d imagine this would be particularly useful in some sort of mixed PR/NPR setting.
Making it easy to have libraries of looks would be really useful.

Spectral Filmic is a big project in and of itself, but it’s being worked on. Custom CMFs will work well with Filmic once it’s back in, I would imagine.

The most common applications of this that I can see would be film emulation; more accurate colour blindness emulation (you can essentially just plug the spectral response functions of the channels in and get things looking ‘right’); or, as you say, just playing with it for the sake of playing.

I think if we could find spectral data for a lot of colour film, this could be the reincarnation of the ‘film looks’ Blender used to have.
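
To be concrete, “plugging in” a response function is just a weighted sum of the spectrum against each channel’s response curve, something like this (simplified sketch; it assumes both arrays are sampled at the same wavelengths, step_nm apart, and ignores normalisation):

/* Simplified sketch: integrate a sampled spectrum against one channel's
 * spectral response curve to get that channel's value. */
float channel_value(const vector<float> &spectrum,
                    const vector<float> &response,
                    float step_nm)
{
  float sum = 0.0f;
  for (size_t i = 0; i < spectrum.size(); i++) {
    sum += spectrum[i] * response[i] * step_nm;
  }
  return sum;
}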


You may be interested in Open Film Tools:
https://www.hdm-stuttgart.de/open-film-tools/lichtquellen_spektren


@brecht thanks for the guidance, I think I have the importance sampling working now.

An issue has cropped up (separate from the importance sampling) where it seems like the wavelength seed only changes once every 10 samples. Is this something you’re familiar with? At first I thought it must have to do with the dimension we’re using, but that doesn’t seem to be the case; changing the dimension doesn’t seem to have an impact. Other things (film and lens position, time, BSDF u and v, etc.) seem to be correctly updating every sample.

What other variables are responsible for determining the result of path_state_rng_1D which might be constant for 10 samples at a time?


Any preliminary results in terms of speedup or noise quality change?
I’d guess it’s particularly relevant for narrow spectra? Like, probably a good test would be to use different widths of the new Gaussian Spectrum node.

I don’t think there is anything that changes every 10 samples. Maybe it’s using the RNG before it has been initialized or something? Maybe the dimension number is invalid somehow, and the Sobol sequence has not been initialized for that dimension? Does it work when you use e.g. the time dimension instead as a test? Does it work with CMJ or PMJ sampler?

@pembem tested this; it only occurs with the Sobol sequence. It’s possible something hasn’t yet been initialised, I guess. I’m not sure what would have changed, but that’s a good place to start looking.

I’ve tried using the same dimension as others such as time, but that didn’t seem to impact the results.

Once the core feature is there, I think it’ll make sense to either give the user presets or expose (yet another) spectrum curve in the UI which represents the wavelength sampling importance. That way a user could specify which wavelengths they deem ‘important’ for a render. For now, the main benefit is extending the sampling to the wider 360–830 nm range without a performance cost. Once I have the sampling working properly again, I’ll do a comparison.

The issue only occurs when using the Sobol sampling pattern. CMJ and PMJ work perfectly; here’s a comparison of the scene rendered with identical settings except for the sampling pattern:



Here’s another example; you can clearly see how the color noise pattern changes every tenth sample, as described:

Changing dimensions does not help.


This is the value of state at the start of my changes, at the end of path_state_init.

My Linux build finally works :partying_face:
