Thoughts on making Cycles into a spectral renderer

Which sRGB though :smiley:

And the thing is, textures shouldn’t always be sRGB. Normal maps and greyscale data shouldn’t be, right? Those are typically Non-Color, as I understood it so far.



I’m away from the computer so can’t give you a proper answer right now, but you’re right: non-colour data shouldn’t be interpreted. My guess is that linear might do the job, but it’s worth checking.


I probably should have been clearer when writing the configuration and comments.

Currently there are only the bare minimum transforms listed. Blender doesn’t do any family filtering or such, so you get all transforms listed by default.

Float data is just data: anything that doesn’t describe three lights / chromaticity / “colour”.

In terms of texture encodings, it’s a bit trickier. Technically, the encoding should describe the texture’s state. In many cases, sRGB is the wrong state, as textures were frequently mastered on non-sRGB displays. See below for more nuanced information.

I likely should make two Displays, as this is the goal of the two options.

Display colourimetry describes both the colours of the primary lights and the display’s transfer functions. In a majority of cases, unless folks are on a moderately expensive display, the wager is that it is not an sRGB display. Why? Possibly cost, but that’s speculation.

On commodity sRGB-like hardware, the display has a pure 2.2 power function baked into the decoding hardware: it receives the encoded values and decodes them with a power(RGB, 2.2) transfer function to get back to radiometric-ratio light output. That 2.2 power function disqualifies it as an sRGB display, because according to the specification, a “correct” sRGB display has the two-part transfer function described there. This has been validated by recording secretary Jack Holm, for those folks seeking confirmation on a less-than-ideal specification’s language PDF.

For a majority of folks, use the sRGB-like commodity transfer function. If your display has a specific sRGB mode, use sRGB.
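To make the distinction concrete, here’s a small Python sketch (not from the config, just an illustration) comparing the two decodings: the two-part sRGB transfer function from IEC 61966-2-1 versus a pure 2.2 power function.

```python
def srgb_decode(v):
    # Two-part sRGB decoding (encoded value -> linear light),
    # per the IEC 61966-2-1 specification.
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def power_22_decode(v):
    # Pure 2.2 power decoding, as baked into commodity display hardware.
    return v ** 2.2

# The two differ most in the shadows:
for code in (0.05, 0.1, 0.5, 1.0):
    print(f"{code:.2f}  sRGB: {srgb_decode(code):.5f}  2.2: {power_22_decode(code):.5f}")
```

The two curves agree at 0.0 and 1.0 but diverge noticeably in the low end, which is exactly where a wrong assumption about the display shows up.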

The same applies for encoding, as albedo encoding has an impact on reflected light. The proper decoding will depend on how the image was encoded.

I’ll update the configuration to split the displays into two to make it more clear.
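Roughly, splitting the displays in an OCIO config would look something like the fragment below. This is a hedged sketch only; the display and colourspace names here are illustrative, not the ones in the actual config.

```yaml
# Hypothetical OCIO display section: names are illustrative only.
displays:
  sRGB Display:
    - !<View> {name: Standard, colorspace: sRGB}
  2.2 Power Display:
    - !<View> {name: Standard, colorspace: BT.709 2.2 CCTF}
```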

ADDENDUM: I’ve updated the configuration and included the Python generator for those interested. I’d appreciate testing by anyone here who is able. If you can’t build, you can copy-paste the config.ocio and the LUTs directory, with contents. The branch is located via this link. Thanks to @kram1032 for the question that led to peeling apart the display classes, as it makes for a much less confusing base to build on top of.

So this is the equivalent of “Non-Color”, then? Could you maybe name it the same as in regular Blender, if that’s the case? Right now I have to keep switching that option as I render the same scene in both versions, since neither knows about the other’s option.

Is that what CCTF stands for? And would there be a way of figuring out for sure which version is correct for a given screen? (I fully expect that you’re right in assuming my screen isn’t high end enough, but, like, “just in case” :slight_smile: )

Also, my primary goal was comparability. Is what regular Blender lists as sRGB the same as the BT.709 2.2 CCTF colourspace, or the sRGB colourspace? (Which, by the way, now that I read it: you went with British spelling, which is inconsistent with Blender’s choices.)

PS: Can’t wait for that Spectral Filmic you’ve been teasing~ :slight_smile:

@smilebags can we have a build of that, please?

Are transparent shadows (i.e. filtered through a Transparent BSDF) included in that? Because right now those appear uncolored.



(the gradient in that colored shadow isn’t a bug by the way. I tried out a variant on the surface absorption shader I posted above)

I suppose this is a good opportunity to integrate OCIO’s filename detection.

Specifically, it is named float because data can be transported in a number of different encodings according to OCIO. Given that spectral is a seismic shift, I figured it would be fine to start from a clean base for the time being.

It’s a good question.

First, CCTF stands for Colour Component Transfer Function. Second, there’s no easy way to determine what display type you have without a piece of hardware to measure the intensity of output. A light meter or colourimeter would be required, I think. It’s plausible that a clever use of a DSLR with a raw encoding could work too, I suppose.

The “Standard” is the sRGB inverse EOTF. Filmic, on the other hand, went with the large numbers and is aimed at a pure 2.2, as most folks likely don’t have a higher-end display.

Yeah it’s habit, sorry. Given only two or three folks are using it, didn’t leap out as a huge thing. Queen’s English and all…

If I weren’t such a meathead it would be done by now. I had to shift gears, as the original effort it was based on targeted a fixed, wider gamut. Spectral makes the entire spectral locus the target, so I ended up having to rethink things. That led to the shorter-term goal of a reasonable set of wider-than-BT.709 RGB primaries to get up and running, primaries that also play nicely with spectral effects. It’s a huge kettle of fish; as you can see from your “WTF PURPLE?!?” tests, it drills right into gamut mapping and all sorts of other problems.

As folks have also noticed, the Flying Spaghetti Monster UI isn’t managed. That means that even though the working reference is somewhat close to proper D65 BT.709, when you input values in the RGB picker, they go directly in as reference values. On the way out, they receive the from-reference transform without the matching to-reference transform ever having been applied on the way in. Hence the numerical discrepancy.
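A tiny sketch of that discrepancy, under the assumption that the display transform is the sRGB inverse EOTF (the numbers are purely illustrative; the point is the missing to-reference step on input):

```python
def srgb_inverse_eotf(x):
    # Linear reference -> display-encoded value (the "from reference"
    # transform applied on the way out to the display).
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

picker_value = 0.5            # what the user types into the RGB picker
reference = picker_value      # goes in untransformed: no "to reference" step
displayed = srgb_inverse_eotf(reference)
print(displayed)              # ~0.735, not the 0.5 the user typed
```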

TL;DR: The spectral effort puts the broken bits of Blender front and centre. I’m hoping some gradual development in Blender proper can make this easier.

Sorry I haven’t had time to work on this lately. Should have another build mid to late next week with the new config from Troy, if not some other improvement.


Why not use my snippet for now? It has the exact same effect, without the code duplication.


I thought that was the idea

My offer to automate builds still stands, just tell me what repository and what branch to build and it’ll update a nightly build on GraphicAll.


I would really appreciate it if you could. The branch that is most likely to be up to date is this one here:

This is where Troy pushes to when he is working, and when I work I push both here and to the branch of the same name on my account’s repo (Smilebags/blender).

I’d point out that the actual branch is

The goal was to do periodic rebases there, not sure if that would impact the auto-builds.

Thanks for that, I hope the link I copied sends you to the branch. If the auto builds just checkout the head of the branch it should be fine, I would think.

A daily updated branch build from Troy’s GitHub is now available here

The GPU kernels did not build, so they have not been included for now. This branch has the same issues I fixed earlier: it uses [0] rather than .x.


I’ll try to get them fixed soon. I appreciate it, though.


Nice! Thanks for this!

So awesome to see the progress with this! It would be great to test it with GPU rendering.
Here is a quick test I did:

Pretty big difference!
The left sphere is 100% saturated blue, the monkey is 100% saturated green, and the right sphere is 100% saturated red. The left emissive sphere is 100% saturated red, and the right emissive sphere is 100% saturated blue. The ground plane is mid grey.
A bit of an odd thing I noticed was that the left emissive sphere was blown out to white in the preview render, but orange in the final render. How come?


Without drilling into it, I strongly suspect that’s a gamut clip issue. The base transforms are not nuanced in any manner.

In this one, at first glance, there’s not much of a difference other than the light sources, but if you compare them directly (flip back and forth between them), you’ll find that the spectral render shows vastly more detail on the Suzannes. It really shows the range of differences to expect: basically none in gray areas, very little even in saturated areas that aren’t pure red, green, or blue, but quite substantial improvements at those extremes.

Also, it’s not quite so simple as RGB being brighter than Spectral. If you look closely, you’ll find that the light on the bottom plane actually reaches farther in the spectral version (the outer edge of the non-black part of the image extends farther), despite being darker in the center. I just can’t ever be sure how far that’s simply a tone-mapping difference. (This seems to be a pure brightness thing after all, so that could be due to tone mapping, right?) (There’s no tone mapping yet, so this is nonsense.)



GIF comparison:

EDIT: I modified the scene to get rid of much of the direct same-color light by adding this cone:

That made the effects quite a bit more pronounced. Interestingly, perhaps due to the relative darkness, the denoiser struggled more with the RGB version of the green Suzanne’s eyes. You can also see how the colors on the Suzannes look quite different, as the diffuse portion of the shader is no longer entirely black for other light sources and so mixes into the highlights. The improved overall contrast is also still very much visible.




It really is more realistic, there’s no doubt about it. I understand it’s somewhat biased because these renders use solid colors, but the difference is there.
