Thoughts on making Cycles into a spectral renderer

This is the wrong thinking. Completely. It's not complex to have folks understand that there are literally three lights, and you don't know what they are. If a developer still needs goofy types after having that explained, they need to stop developing.

Using types might seem like a good idea, but it always reduces to metadata. As someone who has been down this rabbit hole for a long time, I'll say it again: it can't work. If you don't believe me, perhaps one of the wise developers who frequent this thread can speak to how complex the idea of metadata is for untracked alpha states in software that doesn't have a ground truth. There are two states, and metadata never works there.

It could work in extremely limited cases where your colorimetric points can be enumerated, but in a pipeline where the reference working space, your inputs, your outputs, and a gabillion potential input buffers from various sources, including a plethora of camera encodings, are all variable, it is a fool's errand.

If everyone in this thread can communicate and read and comprehend the idea, it’s not expecting too much for a person developing upon a complex digital content creation pipeline to learn some rudimentary concepts.

Again, everything works just fine right now. The point of failure is the developers, and then the culture that doesn’t see how busted their approach is.

Spectral needs a solid culture of folks like the ones in this thread. Anything less likely can't work, no matter how hard the architecture tries.


I know you said proprietary, but do you think you could give a rough overview of the kinds of types you are using to make it all work? Not sure how far you can go but if there already is a solid design, might as well be inspired by it.

You know you aren't likely gonna make them listen to you any more with that kind of attitude, right?

This looks like a pretty awesome way to deal with colour transforms… I'm not sure if OCIO is already being used elsewhere in Blender, but I can start to see how this could remove the need for any hardcoded colour spaces. We'd still need something to deal directly with the rendering calculations, but if OCIO can also handle custom spectral response curves for different 'virtual sensors', that makes for a very versatile and simpler system.

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Get the global OpenColorIO config.
// This will auto-initialize (using $OCIO) on first use.
OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();

// Get the processor corresponding to this transform.
OCIO::ConstProcessorRcPtr processor = config->getProcessor(OCIO::ROLE_COMPOSITING_LOG,
                                                           OCIO::ROLE_SCENE_LINEAR);

// Wrap the image in a light-weight PackedImageDesc.
OCIO::PackedImageDesc img(imageData, w, h, 4);

// Apply the color transformation (in place).
processor->apply(img);

My example was incomplete - I think few color spaces would need to be hard coded, if any. Rather, you’d need ‘color_scene_linear’, ‘color_device_linear’ etc types. Their primaries would be defined at run time. The important part would be that developers cannot accidentally use different types in the same calculation without an explicit conversion.

Now, that still leaves room for errors, because one must still use the appropriate type and the appropriate conversion - that is still left to the developer. Still, it would prevent some accidental mixups and it would make it clear what color space a piece of code is operating in.
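To make that concrete, here is a minimal sketch of what such types could look like. Nothing below is existing Blender code; the names and operators are purely illustrative, but they show how an accidental mix of spaces becomes a compile error rather than a silent bug.

// Purely illustrative sketch: distinct wrapper types so that scene-linear
// and display-linear values cannot be combined without an explicit conversion.
struct float3 { float x, y, z; };

struct color_scene_linear   { float3 v; };
struct color_display_linear { float3 v; };

// Component-wise multiply is only defined within a single space.
inline color_scene_linear operator*(const color_scene_linear &a,
                                    const color_scene_linear &b)
{
  return {{a.v.x * b.v.x, a.v.y * b.v.y, a.v.z * b.v.z}};
}

// The only bridge between the spaces is an explicit conversion, whose
// matrix would be resolved at run time (e.g. from the OCIO config).
color_display_linear to_display(const color_scene_linear &c);

void example(const color_scene_linear &albedo, const color_display_linear &pixel)
{
  color_scene_linear ok = albedo * albedo;     // fine
  // color_scene_linear bad = albedo * pixel;  // compile error: no such operator
  (void)ok;
  (void)pixel;
}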


Taking these last two notions of @StefanW and @smilebags together, one could end up with something that is akin to the proprietary implementation I mentioned before. It is indeed important to carry the colorspace definition along with the color data (duh), and that is indeed metadata. But the important thing to see is that the RGB-to-spectral conversion, done by sampling using the colorspace definition, removes the colorspace from the (now spectral) data flow. So during spectral Cycles path tracing, the colorspace is void (pun intended). When, after Cycles is done, the spectral-to-RGB conversion happens, the void is filled by the output-target-dependent colorspace definition. (If OCIO always requires a valid target colorspace, then the previously discussed equal-energy E could function as that void during the spectral data flow, I guess - but I've seen you have discussed that at length already.)


The only place I can see a colour space needing to be ‘hard coded’ is the input data to spectral upsampling. We need an XYZ coordinate to upsample to spectral, then it goes through the rendering pipeline as spectral data. This could come from colour in any colour space, either from images or a ‘working space’ for the colour picker.

Then a camera response curve and a destination colour space are needed (which should ideally be changeable at render time). The display transform can be handled automagically from there.

Only one point in the entire pipeline needs to be in a known colour space. :exploding_head:
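For what it's worth, the tail end of that flow can already be expressed with the OCIO API shown earlier in the thread. Below is a rough sketch assuming an OCIO v1-style configuration: the spectral integration step is only described in the comments, and the function name is made up, but the OCIO calls themselves exist.

#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// 1. Input: any colour is brought to XYZ (via the OCIO role), then upsampled
//    to spectral samples - the single 'known' colour space in the pipeline.
// 2. Rendering: everything stays spectral.
// 3. Output: the spectrum is integrated against observer or camera response
//    curves into scene-linear RGB, and OCIO handles the display transform.
void apply_display_transform(float *pixels, int w, int h,
                             const char *display, const char *view)
{
  OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();

  // Scene-linear to the chosen display/view, as defined by the config.
  OCIO::DisplayTransformRcPtr dt = OCIO::DisplayTransform::Create();
  dt->setInputColorSpaceName(OCIO::ROLE_SCENE_LINEAR);
  dt->setDisplay(display);
  dt->setView(view);

  OCIO::ConstProcessorRcPtr processor = config->getProcessor(dt);
  OCIO::PackedImageDesc img(pixels, w, h, 4);
  processor->apply(img);
}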

Scroll up. You’ll see that XYZ is already available via the OCIO role alias.

The whole idea of upsampling is fundamentally flawed in that it purposefully constricts your gamut, which in turn reduces spectral effects, and worse, limits the rendered output.

For example, how do you upsample BT.2020, which is a bare minimum gamut size if choosing to stick with an RGB based renderer? Answer: you can't, because it's spectral already, albeit chromaticity defined. Reusing components, while likely problematic, is the only means of making this less of the gargantuan change that it already likely is.

The solution is to start forward thinking on UI. For example, is there a means to convert the colour wheel as it currently exists to spectral? Possibly via mapping the spectral locus around the perimeter in an incremented wavelength manner.

The only issue with this is that those chromaticity coordinates need to be defined somewhere. That's a tremendous body of work, given the information isn't always available in the manner one requires. OCIO doesn't yet offer metadata to glean this from a configuration, so it leads down the path of writing another CMS from the ground up and encoding all of the data. There might be a way to make this work, but you've done extensive work with Blender's code base, and that helps to frame the required energy: it is virtually impossible. Hero wavelength rendering, on the other hand, should work with a minimal amount of fixing and repairing, simply carrying on treating the planes as radiometric.

It's difficult, from the historical vantage point, to make it useful in the now.

That is not exclusive to this solution. You would need to do this for any correct implementation.

I think Brecht's initial thought is to firewall the spectral, which makes plenty of good sense. So nothing changes, and nothing is required to change, with the exception of some potential modifications to existing UI.

You know the code base better than most. In theory, aside from some minor breakages and misbehaviour, it is already hero wavelength ready if the planes are simply treated as radiometric.

It already works.

The key is education.

I think I still have not made myself clear (I'm typing on my phone and have no plans for sitting at the computer during my time off). I would not make any functional changes to the code. 'Color_scene_linear' would still be a float[3], nothing more. So would 'color_display_linear'. The change would be semantic, with the goal of making it apparent to developers what they're working with. In my eyes, this is part of education. When we want to teach developers that not all color representations are the same, maybe we should simply remove the word "color", without any further qualification, from the code base.


This is already exposed via the XYZ role.

There is a whole colour management system already available within Blender. Developers still hard code.

I should clarify here, as a little bird who could be taken as authoritative pointed out that the statement regarding Manuka and Hero Wavelength is completely false.

The original Hero Wavelength paper was written by WETA peeps. Said little bird also stated that Pixar experimented with three channel approaches to upgrade their pipeline “the easy way”, and found it “to not work really well” as compared to using four wavelengths with respect to noise. The efficiency of the Monte Carlo estimator goes down significantly as a result. The last two sentences are paraphrased, and hopefully said little bird doesn’t think I mangled up their intended communication.

This makes sense. It seems getting better colour management throughout the system is going to be an important first step, which allows us to make the decision regarding how many wavelengths to use in hero wavelength sampling later more flexibly.

I was thinking about the noise problem, and you could potentially get significant benefits by weighting the wavelengths by their perceived brightness, rather than wasting samples on wavelengths which have very little influence on the image.
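Roughly, that would mean importance sampling the wavelength with a pdf proportional to a perceptual weight such as the CIE luminous efficiency curve. A minimal sketch follows; the bin layout, table contents, and names are all assumptions for illustration, not Blender code.

#include <array>
#include <algorithm>

// Pick wavelengths with probability proportional to a perceptual weight
// (e.g. the CIE y-bar curve) instead of uniformly over the visible range,
// and return the pdf so the Monte Carlo estimator can divide by it and
// remain unbiased.
constexpr int NUM_BINS = 81;                      // 380..780 nm in 5 nm bins
extern const std::array<float, NUM_BINS> weight;  // e.g. y-bar sampled per bin

struct WavelengthSample { float lambda_nm, pdf; };

WavelengthSample sample_wavelength(float u)  // u uniform in [0, 1)
{
  // Running sum of the weights (in practice this CDF would be precomputed).
  std::array<float, NUM_BINS> cdf{};
  float total = 0.0f;
  for (int i = 0; i < NUM_BINS; i++) { total += weight[i]; cdf[i] = total; }

  // Invert the CDF: the first bin whose cumulative weight exceeds u * total.
  const float target = u * total;
  int i = int(std::upper_bound(cdf.begin(), cdf.end(), target) - cdf.begin());
  if (i >= NUM_BINS) i = NUM_BINS - 1;

  const float pdf = weight[i] / (total * 5.0f);      // per-nanometre pdf
  const float lambda = 380.0f + 5.0f * (i + 0.5f);   // bin centre
  return {lambda, pdf};
}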

Interesting next step, I think. But first I need to get the basic implementation sorted.

We could use an actual class for C++, and a simple typedef float4 SpectralColor; for OpenCL and CUDA. That way we still automatically get compile errors when there is an assignment or multiplication without proper conversion.

Having the conversion as an explicit function call is not a bad thing I think. It’s not a cheap conversion and it should be clear when it happens.
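As a sketch of that split - none of these are actual Cycles types, and the GPU guard macro is only assumed for illustration - it could look something like this, with the conversion kept as a clearly visible call:

// C++ host side gets a real class, so assigning or multiplying an RGB value
// into a spectral one without conversion fails to compile; the GPU kernels
// fall back to a plain typedef where operator overloading is limited.
#ifdef __KERNEL_GPU__
typedef float4 SpectralColor;
#else
struct float4 { float x, y, z, w; };
struct float3 { float x, y, z; };

class SpectralColor {
 public:
  explicit SpectralColor(const float4 &v) : v_(v) {}

  // Only spectral * spectral is allowed; there is deliberately no overload
  // taking float3, so RGB values cannot sneak into spectral math.
  SpectralColor operator*(const SpectralColor &o) const
  {
    return SpectralColor({v_.x * o.v_.x, v_.y * o.v_.y, v_.z * o.v_.z, v_.w * o.v_.w});
  }

 private:
  float4 v_;
};

// The conversion stays an explicit, visibly non-trivial function call.
SpectralColor rgb_to_spectral(const float3 &rgb, const float4 &wavelengths);
#endif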


Depending on the method we go with, I think it is rather cheap (a lookup into an array and a few float3 multiplies), but I agree, I don't think there's much downside to converting explicitly.
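For reference, that cheap path could look roughly like the following. The basis table and bin layout are placeholders (a Smits- or Jakob-Hanika-style table would be precomputed offline for whichever working space is chosen at run time); nothing here is existing Blender code.

#include <array>

// "A lookup into an array and a few float3 multiplies": one precomputed
// basis curve per primary of the working space, evaluated per wavelength.
struct float3 { float x, y, z; };

constexpr int NUM_BINS = 81;  // 380..780 nm in 5 nm steps (assumption)
constexpr int NUM_HERO = 4;   // wavelengths carried per path (assumption)

extern const std::array<float3, NUM_BINS> rgb_to_spectrum_basis;  // provided elsewhere

inline int bin_for(float lambda_nm)
{
  int i = int((lambda_nm - 380.0f) / 5.0f);
  return i < 0 ? 0 : (i >= NUM_BINS ? NUM_BINS - 1 : i);
}

std::array<float, NUM_HERO> upsample(const float3 &rgb, const float (&lambda)[NUM_HERO])
{
  std::array<float, NUM_HERO> s;
  for (int i = 0; i < NUM_HERO; i++) {
    // One table lookup and one dot product per hero wavelength.
    const float3 &b = rgb_to_spectrum_basis[bin_for(lambda[i])];
    s[i] = rgb.x * b.x + rgb.y * b.y + rgb.z * b.z;
  }
  return s;
}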

I don’t think the output buffer should be changed to XYZ. Currently it’s defined to be in the scene linear color space, which by default is linear Rec709, but when using a different OpenColorIO configuration it can be something different.

I don’t see what the advantage of using XYZ here would be. I think code would be simpler and likely more efficient if we only have to convert between scene linear and spectral.

I don't know what this means exactly; colors are used throughout the code and have to be, it's not in one place. I think mainly you need to change throughput and PathRadiance (in particular, all the members that store a color) to some spectral color type.

The basic thing to make spectral first is the multiplications between throughput and the output of closures like BSDFs and emission. Then you can indeed integrate it deeper so that e.g. a BSDF can output a spectral color directly for things like dispersion.
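A rough sketch of what that first step could look like; the type and function names below are invented for illustration and are not the actual kernel API.

// The closure still evaluates to an RGB float3, which is upsampled to the
// path's hero wavelengths before being multiplied into a spectral throughput.
struct float3 { float x, y, z; };
struct Spectrum { float v[4]; };  // one value per hero wavelength

Spectrum rgb_to_spectrum(const float3 &rgb, const float wavelengths[4]);

inline Spectrum mul(const Spectrum &a, const Spectrum &b)
{
  Spectrum r;
  for (int i = 0; i < 4; i++) r.v[i] = a.v[i] * b.v[i];
  return r;
}

// Schematically, inside the path tracing loop:
//   float3   bsdf_rgb  = closure_eval(...);                      // unchanged closure code
//   Spectrum bsdf_eval = rgb_to_spectrum(bsdf_rgb, wavelengths);
//   throughput         = mul(throughput, bsdf_eval);             // spectral * spectral
// Later, closures such as dispersion could return a Spectrum directly.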

Yes, in my experience it’s worth using importance sampling like that for the wavelength, rather than just uniformly distributing samples in some visible wavelength range.

Okay, that was a misconception on my part. I had thought the output buffer was hard-coded to Rec.709. If it is using OCIO already, this is unnecessary. Are the other passes, such as denoising and light path passes, also using the same configuration?

With this understanding I agree.

Once this change is made, the places which take a colour (such as diffuse and glossy colour, emission colour, etc.) need to be converted from tristimulus into a spectral colour, but there are a lot of places where this occurs, and how it is done seems to differ depending on something. I'm not really sure what needs to change, but right now making such a change (throughput to spectral colour) makes things explode all over the place. Maybe it is just my lack of familiarity with the code causing this friction.

In the spectral system, throughput would already be a spectral colour (if my understanding is correct) and the output of a closure represents the percentage of light which follows the sampled path for each (RGB) channel.

I would only need to convert the output of the closure, then multiply that result with throughput, correct?