The problem is that you would need to include and compile two different kernels for each variation, which increases both compilation time and the size of the Blender package. The CUDA and OptiX kernels alone take 107 MB of storage when unpacked, and adding two more kernel variations would triple that size.
But it definitely can be included as a build option.
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
Invalid context in cuStreamCreate(&cuda_stream_, CU_STREAM_NON_BLOCKING) (C:\blender-git\blender\intern\cycles\device\cuda\queue.cpp:33)
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid value in cuCtxDestroy_v2(cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\device_impl.cpp:147)
Most of the lines are repeated, so I only included the last few.
Both are 4 samples with spectral checked. And yeah the X build’s hair on the right does look problematic.
Also, it is very obvious in this comparison that the X build has a really strong green tint in spectral mode as well; I wonder why.
I also tried an older Spectral X build that was before the darkening fix:
Hair shader still problematic (32 samples). No tint difference, just slightly darker.
Is this intended? I am starting to think it might be: wavelength importance sampling would mean certain wavelengths get sampled more often, so perhaps a difference in tint is to be expected?
The current spectral reconstruction method is not perfect and I think is the reason for those differences. It just wasn’t as noticeable before because of darkening issues. The new spectral reconstruction method should give much better results.
That’s definitely not intended, all emissive materials must look identical, including background. When using importance sampling, if certain wavelengths are selected more often or rarely, they are weighted accordingly and multiplied by corresponding values to ensure correct results.
EDIT: apparently there is a variant that can do even wider-gamut spectral upsampling by going into fluorescence. Not sure if that’s something we want to pursue immediately, but might as well share. It seems to be a cool option for sure.
Wide Gamut Spectral Upsampling with Fluorescence
Plus followup to that paper:
Improving Spectral Upsampling with Fluorescence
And another paper even seems to discuss spectral rendering in both the offline (Cycles) setting and in real time, so that may also be interesting to look into. Maybe some spectral stuff could, in fact, eventually be supported by Eevee! (Sounds like the spectral realtime engine they have built is currently fairly limited, though.)
Using Moments to Represent Bounded Signals for Spectral Rendering
Plus a follow-up to the above:
Spectral Rendering with the Bounded MESE and sRGB Data
One of the proposed applications is an improvement on Hero Wavelengths specifically for very spiky spectra.
And at least based on the video they provide as supplemental material, it looks like it wouldn’t even be that difficult to implement. It seems to amount to stochastically selecting multiple wavelengths, rather than using equally spaced ones around a single stochastically picked hero wavelength, and doing the appropriate weighting. I don’t actually know how difficult this would be, though.
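To make the difference described above concrete, here is a rough sketch of the two sampling strategies, assuming a simple 400–700 nm band. Neither function is Cycles code; they just illustrate "equally spaced around one hero wavelength" versus "each wavelength drawn independently":

```python
import random

def hero_wavelengths(n=4, lo=400.0, hi=700.0):
    """Classic hero wavelength sampling: pick one wavelength at random,
    then derive n-1 more at equal rotations across the band."""
    span = hi - lo
    hero = random.uniform(lo, hi)
    return [lo + (hero - lo + i * span / n) % span for i in range(n)]

def stochastic_wavelengths(n=4, lo=400.0, hi=700.0):
    """The alternative suggested for spiky spectra: draw every
    wavelength independently instead of deriving them from one hero."""
    return [random.uniform(lo, hi) for _ in range(n)]
```

With independent draws there is no fixed 75 nm spacing, so a very spiky spectrum cannot systematically fall between the sampled wavelengths. This sketch glosses over the weighting step, which still has to divide each sample by the pdf of the wavelength that was drawn.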
Sorry I haven’t had a chance to reply to these messages. @kram1032 while I can’t see anything in the stack trace which is spectral specific, it could very well be because of one of the changes the branch has. Hopefully it’ll be resolved upstream, if the issue is coming from Cycles-X, otherwise we (probably @pembem22) will look into it.
And regular Cycles X renders fine now; it’s just the spectral branch that still doesn’t let me use the GPU.
EDIT: BTW, I just tested the new CleanAux with prefiltering OIDN in spectral mode. The new Fast mode for prefiltering does not seem to work well with color noise; Accurate mode works much better in spectral mode.
I have now read up to chapter 4 of that paper and can say it is definitely the most comprehensive summary of all of the spectral upsampling methods I am aware of. It also covers a lot of other related topics in an approachable-as-possible manner; it’s a truly priceless resource for anyone who wants to learn about the topic.
Reading the last chapter in that linked PDF, I came across a quote which made me smile. Although I’m sure they worked hard to ensure this was the case, I think this is a suitable reason to start the transition to a more spectral-aware workflow.
Finally, given that the overhead of rendering spectrally is essentially a rounding error in the sort of scenes we render, why not use the most accurate colour representation available?
This is referring to Manuka, Weta Digital’s in-house spectral renderer. The last line in the PDF then says:
The overhead of uplifting from RGB to spectral is negligible compared to shading and light transport in a typical production scene, so we firmly believe the right question isn’t “why should we go spectral?”, but “why not?”
About the greenish tint in the spectral result, I don’t know whether it is caused by some bug in wavelength importance sampling or by the currently imperfect RGB-to-spectrum conversion, but I tried to match the result in the compositor with the “Difference” blend mode on. I am not sure whether I am doing this correctly; if not, please tell me where I went wrong.
I used two scenes in one file, one rendered with the Spectral switch on and the other with it off. The two scenes have nothing but a pure white world background. Then I used the compositor to try matching them. I consciously chose to use divide, so that the result I get can be directly applied to the white level under the color management section. So I got this result:
I tried my best to match them; you can see the output is not pure black, but I think it is close enough. The result is [0.9614, 1.02, 1]. So for now, if I want to get rid of the spectral greenish tint, a temporary workaround is to either do this in the Color Management panel:
Multiplying the channels isn’t quite the correct operation to be doing here but for now it’s probably close enough. If it reduces the difference in simple scenes, it’s useful as an interim method of comparing the two.
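For anyone who wants to apply the workaround outside the compositor, the per-channel divide amounts to the following sketch. The [0.9614, 1.02, 1] factors come from the match in the post above and are specific to that scene, not universal; and, as noted, a per-RGB-channel scale is only an approximation of a proper spectral fix:

```python
def remove_tint(pixel, white=(0.9614, 1.02, 1.0)):
    """Divide each channel by the measured white level.

    Equivalent to using the measured values as the white level in the
    Color Management panel, or a per-channel Divide node in the
    compositor. An interim correction only, not a proper spectral fix.
    """
    return tuple(c / w for c, w in zip(pixel, white))
```

Feeding the measured white itself through this maps it back to (1, 1, 1), which is exactly what the compositor match was aiming for.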