Thoughts on making Cycles into a spectral renderer

I’m sorry for the terminology. ‘weighing’ in my post indeed referred to spectral sensitivities.

The industrial liquid-paint community would certainly want to use 1 nm CMFs, but that community has been around since before the era of cheap computing power, when everything had to be calculated by hand (!), and since paint is a dirty mess anyway, 10 nm interval CMFs over a limited range sufficed, which brings all kinds of advantages in calculation speed and required storage. In comparison, the RGB community is far worse off at roughly 100 nm intervals, and that has been considered good enough for decades too. If storage capacity is a concern in rendering, I would not venture beyond a 10 nm default interval and a 360-780 nm range.

Then again: spectral rendering allows for some spectacular detail & accuracy, so the user should definitely be able to opt into a 1 nm interval from 360 nm to 830 nm and beyond (the CIE has recommendations on spectral range extension too).
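To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch in C using the ranges above; nothing in it comes from an actual implementation:

#include <stdio.h>

int main(void)
{
  /* 360-780 nm at 10 nm intervals vs 360-830 nm at 1 nm intervals. */
  const int samples_10nm = (780 - 360) / 10 + 1; /* 43 samples */
  const int samples_1nm = (830 - 360) / 1 + 1;   /* 471 samples */
  printf("10 nm table: %zu bytes per spectrum\n", samples_10nm * sizeof(float));
  printf(" 1 nm table: %zu bytes per spectrum\n", samples_1nm * sizeof(float));
  /* Roughly an order of magnitude per stored spectrum, which adds up
   * fast for per-pixel or per-texel spectral data. */
  return 0;
}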

1 Like

The issue with larger wavelength steps in an intermediate format is that effects like dispersion might become discernibly incorrect: wavelengths would be snapped to (say) the nearest 10 nm, and the difference in colour might become apparent in some cases. Then again, for a 10x memory saving, maybe people would accept it.
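To make "discernibly incorrect" a bit more concrete, here's a small C sketch estimating how far a refracted ray drifts when the wavelength is snapped to the nearest 10 nm. The Cauchy coefficients are commonly quoted values for BK7 glass, used purely for illustration:

#include <math.h>
#include <stdio.h>

/* Cauchy approximation n(lambda) = A + B / lambda^2, lambda in micrometres.
 * A and B are commonly quoted BK7 values; illustration only. */
static double cauchy_ior(double lambda_um)
{
  return 1.5046 + 0.00420 / (lambda_um * lambda_um);
}

int main(void)
{
  const double PI = 3.14159265358979323846;
  double n_true = cauchy_ior(0.587); /* true wavelength, 587 nm */
  double n_snap = cauchy_ior(0.590); /* snapped to the nearest 10 nm */
  /* Refracted angle of a 45 degree incident ray, via Snell's law. */
  double t_true = asin(sin(PI / 4.0) / n_true);
  double t_snap = asin(sin(PI / 4.0) / n_snap);
  printf("IOR: %.6f vs %.6f\n", n_true, n_snap);
  printf("angle drift: %.5f degrees\n", (t_snap - t_true) * (180.0 / PI));
  return 0;
}

The per-ray drift is tiny, but it's systematic rather than random, which is why it could show up as visible banding in dispersion effects instead of averaging out.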

1 Like

Oh, you definitely should render at infinitesimal wavelength intervals (i.e. each ray gets a unique random (hero) wavelength from a continuous range) and only accumulate in finite-bandwidth channels. That way your rainbows would still be continuous at any spatial resolution, but the resulting image planes would be binned, saving lots of memory.
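A minimal sketch of that scheme in C; every name in it is an illustrative placeholder, not anything from Cycles:

#include <stdlib.h>

#define NUM_BINS 42 /* 360-780 nm accumulated in 10 nm channels */

typedef struct SpectralPixel {
  float energy[NUM_BINS];
  unsigned int count[NUM_BINS];
} SpectralPixel;

/* Stand-in for tracing a whole path at one wavelength. */
static float trace_path_at(float lambda_nm)
{
  (void)lambda_nm;
  return 1.0f;
}

static void accumulate_sample(SpectralPixel *px)
{
  /* Continuous wavelength: dispersion etc. see the exact value... */
  float lambda = 360.0f + 420.0f * ((float)rand() / (float)RAND_MAX);
  float radiance = trace_path_at(lambda);
  /* ...but only the accumulation is binned, keeping memory use fixed. */
  int bin = (int)((lambda - 360.0f) / 10.0f);
  if (bin >= NUM_BINS)
    bin = NUM_BINS - 1;
  px->energy[bin] += radiance;
  px->count[bin] += 1;
}

int main(void)
{
  SpectralPixel px = {{0.0f}, {0}};
  for (int i = 0; i < 1000; i++)
    accumulate_sample(&px);
  return 0;
}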

1 Like

Given the increment the CMFs are specified at, it really doesn't matter. You have to remember that there's a large discrepancy across standard observers, and as we head outwards to the spectral locus itself, there is inherent metamerism.

It’s splitting hairs with little upside. For those interested in Standard Observer variance, this Google Colab has some useful imagery.

In particular, this image is extremely informative, as it plots the results of an updated experimental test across standard observers. Each plotted line represents one of the test subjects, and you can see the disparity.

2 Likes

I asked Dalai about it. He said there's no point in providing a patch if you feel you can't be the one to maintain it, as currently there's (as with a great many things) no core developer with time to work on it. But if there's not even a rough version out there, it'll stay in limbo. It's worth putting out even if it requires improvements before it can be accepted - and maybe other people can help out. And after all, I don't think this is a part of the engine that will require maintenance forever - correct me if I'm wrong.

So, the way I see it, should you have the time and the energy to port your patch to 2.8x, that’s the only lead we have that I can see for now.

1 Like

Thanks for looking into it. The problem I have with developing for 2.8 is that my machine doesn't have a dedicated GPU, and the OpenGL version supported on the integrated GPU can't run 2.8.

I could rebase and see whether it builds, but I couldn't actually test it myself, so I'd need to work with someone to check that it behaves correctly. I'm also pretty time-poor, so it might take a while. I imagine the simple implementation wouldn't require a whole lot of maintenance, though.

1 Like

If you're on Linux, you can try starting 2.80 with the software-gl script in the Blender folder; for Windows, you'll have to drop in Mesa's opengl32.dll (prebuilt binaries here).

However, realize this is software OpenGL… it's not gonna have a super happy time performance-wise…

1 Like

I tried that previously with a different machine. Just the Blender UI was rendering at frames per minute, so I feel like buying a GPU or working with someone else on this might be a better option.

1 Like

FYI @sam_vh I'm taking a look into it to see whether I can either apply my old diff to current master or just implement the same changes again, hopefully tidying up some shortcuts I took last time. I'd be happy to be the maintainer of the feature, but I'll probably need some pretty patient reviewers, as I'm not too familiar with C/C++ programming and haven't looked a great deal into the Blender code style guide.

3 Likes

I can leave my old workstation on with a VPN connection if you’d like a machine to test things out on, Windows or Linux, whatever you need. PM me if you want to set it up, happy to help get the ball rolling.

3 Likes

Thanks @Mantissa, that will likely come in handy, I’ll let you know if/when I need it.

2 Likes

That sounds great! Looking forward to seeing what comes from this.

2 Likes

@brecht I'm running into some correlation problems: it seems like some aspect of the first glossy bounce (or something similar) is correlated with the wavelength being sampled. Would this be due to using the first dimension from path_state_rng_1D? If so, how can I find out what the next available dimension to use is?
float wavelength_offset = path_state_rng_1D(kg, state, 1);

3 Likes

You can enable __DEBUG_CORRELATION__ to verify whether the issue is correlation in the random numbers; with that enabled, it does purely random sampling.

The wavelength should probably be the same for the entire path, so you could store it in PathState and initialize it once in path_state_init. For that you can add a PRNG_WAVELENGTH = 5 to enum PathTraceDimension, since that dimension is unused now.
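A rough sketch of that suggestion (the wavelength field, its range, and the 360-780 nm mapping below are just illustration, not actual patch code):

/* kernel_types.h: take over the unused dimension 5. */
enum PathTraceDimension {
  PRNG_FILTER_U = 0,
  PRNG_FILTER_V = 1,
  PRNG_LENS_U = 2,
  PRNG_LENS_V = 3,
  PRNG_TIME = 4,
  PRNG_WAVELENGTH = 5, /* previously an unused slot */
  /* ... remaining dimensions unchanged ... */
};

/* path_state_init(): sample once, so the whole path shares one wavelength. */
state->wavelength = 360.0f + 420.0f * path_state_rng_1D(kg, state, PRNG_WAVELENGTH);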

1 Like

I did add the wavelength to the PathState in path_state_init; I'll try the 5th dimension and see if that helps. Thanks.

1 Like

What is responsible for calculating the colour when sampling the background, lights, and emissive materials? I've got most things working, but lights and the background still don't seem to respect their colour; I'm sure I'm just missing a conversion somewhere.

2 Likes

As far as I can tell, it is happening in path_radiance_accum_light and path_radiance_accum_total_light, but I'm not sure which of throughput, bsdf_eval->diffuse, and shadow represents the light colour. My initial guess is shadow, but I'm not very confident of that.

1 Like

I realize this isn't going to be representative yet, as you still have bugs to iron out, but it would be interesting to see what difference it already makes with identical, very saturated colours in otherwise identical scenes, especially regarding the previously mentioned darkening that inevitably happens with rendering based on three primaries. A side-by-side comparison would be great.

1 Like

I will post one once I have a useful comparison to make; right now any differences are almost certainly due to bugs.

1 Like

Instead of going for random sampling, why don’t you use dithering?
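For example, an ordered per-pixel offset (say, a Bayer matrix) instead of an independent random number, so neighbouring pixels sweep the spectrum evenly. A purely illustrative C sketch, not Cycles code:

#include <math.h>

/* Ordered 4x4 (Bayer) thresholds, the classic dither matrix. */
static const float bayer4x4[4][4] = {
  { 0.0f,  8.0f,  2.0f, 10.0f},
  {12.0f,  4.0f, 14.0f,  6.0f},
  { 3.0f, 11.0f,  1.0f,  9.0f},
  {15.0f,  7.0f, 13.0f,  5.0f},
};

static float dithered_wavelength(int x, int y, int sample, int num_samples)
{
  /* Ordered offset in [0,1), rotated per sample index so repeated samples
   * of a pixel still cover the whole range. */
  float offset = (bayer4x4[y & 3][x & 3] + 0.5f) / 16.0f;
  float t = fmodf(offset + (float)sample / (float)num_samples, 1.0f);
  return 360.0f + 420.0f * t; /* map to 360-780 nm */
}

Whether that actually beats random or Sobol sampling in a path tracer is another question, since structured patterns can interact badly with the other sampling dimensions.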

1 Like