Thoughts on making Cycles into a spectral renderer

Theoretically, if someone wanted a slow real-time spectral renderer written from scratch to be integrated into Blender, it could be done. But it wouldn't be efficient or useful, so really the answer is no.


Perhaps a more “simulated”, “low-resolution” compromise could be reached.

There’s no benefit. Eevee isn’t meant for high-quality rendering; it is meant to be fast. Spectral rendering isn’t even needed for most scenes.

If you think it is important, you can contribute to Eevee yourself, but I think you’ll find it isn’t worth the effort.

I wouldn’t be able to write the code anyway.

But in my dreams I imagined something simulated and fast in Eevee, and it will probably remain only a dream :grin:

This is why there are offline and online renderers.

Okay, I’m definitely in this too deep. Green point lights (other colours appear white) seem to colour things differently depending on their orientation to the face. While I feel most of the core implementation has been done, there are a few things which are just way beyond me at this stage.

I will have to take a more fundamental look at Cycles and how it works in order to resolve this one; unfortunately, I doubt I’ll be able to make much more progress for a while.

It might help if you posted your code, someone might be able to spot the bug quickly.

Okay, I’ll see if I can generate a set of changes from my git commits. Thanks.

Here is the full diff from 2.8 branch commit 534009098ea094aa1f3b5298101e5d9e0a3cb1dc
I’m guessing that:
1. I’ve missed a few spots where colours need to react to the wavelengths of the ray, and
2. I’ve done the wrong thing to some bits of code, causing the strange direction-dependent point-light colour.

Turns out the diff was too long so it is here:
https://pastebin.com/ykcwjWLN


I’ve started from scratch with getting colours working on ShaderClosures again, but I can’t seem to locate what Principled’s Metallic component is using, since it doesn’t seem to react to colour the way the Glossy BSDF does.

Where should I be looking to convert the sc->weight that is used by the Principled BSDF’s metallic component?

You should be converting the product of the closure weight and BSDF evaluation result. So for example:

// wrong: only the closure weight is converted, so the tint already baked into eval stays in RGB
float3 wavelength_intensities = rec709_to_wavelength_intensities(sc->weight, wavelengths);
bsdf_eval_accum(result_eval, sc->type, eval * wavelength_intensities, 1.0f);

// right: convert the product of the closure weight and the BSDF evaluation result
float3 wavelength_intensities = rec709_to_wavelength_intensities(eval * sc->weight, wavelengths);
bsdf_eval_accum(result_eval, sc->type, wavelength_intensities, 1.0f);

Otherwise the tinting from the metallic BSDF in eval is not converted properly.

Thank you very much for that tip, that seems like it might solve a few big things.

I’ve started again and gone through each function and analysed its relation to other functions to try to get a bigger-picture understanding; it has helped a lot.

Everything apart from emission and glass seems to be behaving correctly. Glass’s primary reflection is being coloured by the glass’s colour, which isn’t correct.

most of the terms such as BSDF, transmittance, emission, or sensor responsivity have spectral equivalents and depend on wavelength. Modelling this wavelength dependency as closely as possible to physical reality results in much improved fidelity, as well as better importance sampling.

That’s something I didn’t expect to read. I’ll have a read and see if I can determine how that is the case.

Found on page 20 of https://jo.dreggn.org/path-tracing-in-production/2017/part1.pdf

Page 20

The wavelength has to be chosen carefully, and we further employ path reuse and stratification to reduce colour noise (the hero wavelength scheme, as detailed by Wilkie et al.[2014]).

Page 22

Wilkie et al.[2014] do this in combination with efficient path reuse: the path construction is still performed with one main wavelength, and a set of 3 stratified wavelengths are evaluated alongside with it. The final contribution is weighted using multiple importance sampling (MIS), resulting in a much lower variance picture.
The evaluation of the PDF and wavelength-dependent BSDF can be performed in SSE, evaluating four wavelengths in four lanes in one instruction.
Note that this method requires precise computation of PDFs (that is, a stochastically evaluated or approximate PDF may lead to problems)… More advanced MIS techniques share the requirement on consistent PDF evaluation, so enforcing this on all our sampling techniques actually resolved a few headaches when experimenting with new path construction algorithms.

Seems like as long as the PDF is precise, MIS can work well with spectral techniques. Not sure how, but it is good to know that it is possible.
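To make the hero wavelength idea concrete, here is a minimal, self-contained C++ sketch of my understanding of it (this is not Cycles code; spectral_bsdf_eval, spectral_bsdf_pdf and the constants are invented stand-ins). One hero wavelength is sampled, three stratified companions are derived by rotation, and each lane’s contribution is weighted with the one-sample balance heuristic over the four wavelength PDFs, which is exactly why those PDFs need to be exact:

#include <array>
#include <cmath>

constexpr float LAMBDA_MIN = 380.0f;  // assumed visible range, nm
constexpr float LAMBDA_MAX = 730.0f;
constexpr int NUM_LANES = 4;          // hero + 3 companions, one SIMD lane each

// Placeholder wavelength-dependent BSDF: a smooth reflectance and an exact,
// analytically known pdf. These are stand-ins only.
static float spectral_bsdf_eval(float lambda)
{
    return 0.5f + 0.4f * std::cos((lambda - LAMBDA_MIN) * 0.02f);
}
static float spectral_bsdf_pdf(float lambda)
{
    return 0.2f + 0.1f * std::sin(lambda * 0.01f);
}

// Hero wavelength chosen from one random number u in [0,1); the other lanes
// are equally spaced rotations so the set stratifies the visible range.
static std::array<float, NUM_LANES> sample_hero_wavelengths(float u)
{
    std::array<float, NUM_LANES> lambdas;
    const float range = LAMBDA_MAX - LAMBDA_MIN;
    for (int i = 0; i < NUM_LANES; i++) {
        float t = u + float(i) / NUM_LANES;
        if (t >= 1.0f)
            t -= 1.0f;  // wrap around the range
        lambdas[i] = LAMBDA_MIN + t * range;
    }
    return lambdas;
}

// The path is constructed with lambdas[0] only, but all four lanes are
// evaluated. Each lane is weighted with the one-sample balance heuristic over
// the pdfs it would have had under every possible hero choice.
static std::array<float, NUM_LANES> hero_mis_contribution(const std::array<float, NUM_LANES> &lambdas)
{
    float pdf_sum = 0.0f;
    for (int i = 0; i < NUM_LANES; i++)
        pdf_sum += spectral_bsdf_pdf(lambdas[i]);

    std::array<float, NUM_LANES> contrib;
    for (int i = 0; i < NUM_LANES; i++) {
        contrib[i] = (pdf_sum > 0.0f) ?
            NUM_LANES * spectral_bsdf_eval(lambdas[i]) / pdf_sum : 0.0f;
    }
    return contrib;
}

The four eval/pdf calls map naturally onto the four SSE lanes the document mentions, and if any pdf were stochastic or only approximate the balance-heuristic weights would be wrong, which matches the warning in the quote.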

I’ve started to get back into this and am almost there in terms of porting everything to behave correctly, but I can’t find a few things:
- Where light primitives are sampled and their colour is determined (the Emission BSDF works, but lamps don’t).
- Why the Glass BSDF’s primary reflection is still coloured by its colour input.

@brecht, do you know why these might be the case? I would have thought applying your feedback below would have solved this but it didn’t seem to work.

See kernel_emission.h for where emission is evaluated. For lights specifically direct_emissive_eval is the important one. In many cases it uses a constant color, in others the shader is evaluated.
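As a rough illustration of where that conversion could go (a heavily simplified sketch, not the real kernel code; the exact direct_emissive_eval arguments differ between versions, and rec709_to_wavelength_intensities is the helper from the earlier snippet, not an existing kernel function):

/* Sketch only: make the lamp colour react to the ray's wavelengths by
 * converting the RGB value that direct_emissive_eval() returns. */
float3 light_sample_eval_spectral(KernelGlobals *kg,
                                  ShaderData *emission_sd,
                                  LightSample *ls,
                                  ccl_addr_space PathState *state,
                                  float3 wavelengths)
{
    /* Returns an RGB intensity whether the lamp used a constant colour or an
     * evaluated emission shader. */
    float3 rgb_eval = direct_emissive_eval(kg, emission_sd, ls, state,
                                           -ls->D, differential3_zero(),
                                           ls->t, state->time);

    /* Converting once here covers both the constant-colour and shader paths. */
    return rec709_to_wavelength_intensities(rgb_eval, wavelengths);
}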

I’m not sure what you mean by the Glass primary reflection being colored by the BSDF’s color input. The Glass BSDF color is supposed to affect the reflection, absorption is handled as a volume effect.

Hey @smilebags, I’m in the process of discussing colour management improvements with @troy_s and @JeroenBakker, and the topic of spectral rendering has come up. Great to see that you seem to have made quite some progress on this already.

Any recent news on this? I would like to bring the topic of spectral rendering to the attention of the Render & Cycles project, and I can’t find any existing content on it in the bug tracker.

Hi @sam_vh, great to hear you’re looking into this. The short update is that while I did get it working for the most part, I wasn’t intending to try to get it merged at this point; it was more of an investigation into its achievability. TL;DR: I believe it is achievable and has benefits.

I have since built a toy rendering engine in the browser (on paper it ‘works’, but performance hasn’t been considered at all). That engine has the whole spectral rendering part working soundly, so I would say I have a reasonable understanding of the topic, though I can offer less in the way of Blender-specific advice.
I’m more than happy to have a more in-depth discussion with you about it.


I didn’t put anything on the bug tracker although I can get a diff from the version of 2.79 I was working from if looking at the code would help.

What ultimately led me to stop working on this is that while an ‘in place’ replacement of the RGB model with a spectral model in the rendering engine is possible, there is limited benefit when the user is not able to interface with lights and materials at the spectrum level. The reason is that ‘upsampling’ an RGB triplet to a spectrum lends itself to very flat and ultimately ‘boring’ spectra.
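To illustrate what I mean by ‘flat’: a toy upsampler only has a few broad, overlapping basis curves to work with, so whatever spectrum it reconstructs is necessarily smooth. The basis shapes below are invented purely for illustration and are not any published method, but real upsampling schemes share the smoothness property:

#include <cmath>

// Broad Gaussian bump used as a basis curve.
static float gaussian_bump(float lambda, float center, float width)
{
    float d = (lambda - center) / width;
    return std::exp(-0.5f * d * d);
}

// Toy reflectance at wavelength 'lambda' (nm) reconstructed from an RGB
// triplet: just R, G and B scaling three broad bumps. It can never produce
// the narrow spikes of, say, a gas-discharge lamp or a laser.
float upsample_rgb_toy(float r, float g, float b, float lambda)
{
    return r * gaussian_bump(lambda, 610.0f, 50.0f)
         + g * gaussian_bump(lambda, 545.0f, 45.0f)
         + b * gaussian_bump(lambda, 465.0f, 40.0f);
}

Spiky, saturated spectra simply can’t be expressed this way, which is why the results stay ‘boring’ unless the user can author spectra directly.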

Looking at the first part of this thread, it seems that exposing spectra to the user is currently a big no-no, so without some shift in mindset or priorities, I’m not sure people would be that ‘wow’-ed by the results.

On the other hand, even boring spectra allow you to handle colour properly, so maybe an invisible implementation (not exposed to the user at all) is a good stepping stone to a full spectral engine, including the material system.
Again, I’m happy to talk more about this if you would like.
