Yes, the approach we use is actually very solid. That explanation was simply an analogy. The actual wavelengths are ‘evenly spaced’ in the sampling domain, but that sampling domain is not linear with respect to wavelength: it gives much more emphasis to the perceptually influential wavelengths (specifically, the importance metric is the sum of the XYZ colour-matching functions at each wavelength). This enables importance sampling of wavelengths while keeping the converged ground-truth result bias-free.
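A minimal sketch of that idea, assuming rough Gaussian stand-ins for the CIE curves (the real tables and Cycles' actual implementation are not reproduced here), plus a small uniform mixture term I've added to keep the weights bounded:

```python
import numpy as np

# Sketch of wavelength importance sampling with a pdf proportional to the
# sum of (approximate) CIE XYZ colour-matching functions. The Gaussians
# below are illustrative stand-ins, not the real CIE tables.
LAMBDA_MIN, LAMBDA_MAX = 380.0, 730.0

def xyz_sum(lam):
    x = np.exp(-0.5 * ((lam - 600.0) / 40.0) ** 2)
    y = np.exp(-0.5 * ((lam - 555.0) / 50.0) ** 2)
    z = np.exp(-0.5 * ((lam - 450.0) / 30.0) ** 2)
    return x + y + z

# pdf ∝ x+y+z, mixed with a small uniform term (my addition) so every
# wavelength keeps a nonzero selection probability and 1/pdf stays bounded.
grid = np.linspace(LAMBDA_MIN, LAMBDA_MAX, 1024)
dx = grid[1] - grid[0]
pdf = xyz_sum(grid)
pdf = 0.7 * pdf / (pdf.sum() * dx) + 0.3 / (LAMBDA_MAX - LAMBDA_MIN)
cdf = np.cumsum(pdf) * dx
cdf /= cdf[-1]

def sample_wavelength(u):
    """Uniform u in [0,1) -> wavelength via the inverse CDF: 'evenly spaced'
    in the sampling domain, but not linear in wavelength."""
    lam = np.interp(u, cdf, grid)
    return lam, np.interp(lam, grid, pdf)

# Unbiasedness check: estimate the integral of a flat spectrum S(lam) = 1,
# whose true value is just the width of the range (350 nm).
rng = np.random.default_rng(7)
lam, p = sample_wavelength(rng.random(200_000))
estimate = np.mean(1.0 / p)  # importance weight S(lam) / pdf(lam)
```

Even though short wavelengths are picked far less often than the middle of the visible range, the 1/pdf weighting recovers the correct integral, which is the "bias-free converged result" property.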
I thought this was only done to pick the hero wavelength, and that the other n−1 were then sampled with uniform steps (in terms of the numeric value of the wavelength).
Interestingly, this would actually introduce bias, but the idea is correct. Hopefully in the future we will be able to introduce more fine-grained sampling optimisations, such as basing the importance on the reflectance spectrum of the first bounce (though I expect this might go against the preprocessing engineering cornerstone that helps keep Cycles as interactive as it is).
Come to think of it, I’m no longer confident it would introduce bias, but it definitely wouldn’t help sampling variance. I agree, it is interesting to see what possibilities there are building on the core of spectral sampling. I feel the tooling around a spectral workflow is a much larger task than the core computational changes, but also much more valuable to the user.
So what’s left to do before submitting the differential revision for code review?
So OSL is checked off, and MIS on GPU is checked off. I guess the next step will be fixing CUDA (which I assume is the reason for the CUDA failures I’m experiencing) and the render passes?
Are there other things besides these, like OptiX support? Is there a complete list anywhere? I would like to know the “road map” for the initial merge. It’s kind of unclear to me what exactly will be included; for instance, if I am not mistaken, the spectral nodes will not be included. Not sure what else.
The problem is, you would need to include and compile two different kernels for each variation. This increases both compilation time and the size of the Blender package. The CUDA and OptiX kernels alone take 107 MB of storage when unpacked, and adding two more variations of the kernels would triple that size.
But it definitely can be included as a build option.
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
Invalid context in cuStreamCreate(&cuda_stream_, CU_STREAM_NON_BLOCKING) (C:\blender-git\blender\intern\cycles\device\cuda\queue.cpp:33)
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid context in cuCtxPopCurrent(NULL) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:31)
Invalid value in cuCtxPushCurrent(device->cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\util.cpp:26)
System is out of GPU and shared host memory
Invalid value in cuCtxDestroy_v2(cuContext) (C:\blender-git\blender\intern\cycles\device\cuda\device_impl.cpp:147)
Most of the lines were repeating, so I only included the last few above.
Both renders are 4 samples with spectral enabled. And yeah, the X build’s hair on the right does look problematic.
It is also very obvious in this comparison that the X build has a really strong green tint in spectral mode as well; I wonder why.
I also tried an older Spectral X build that was before the darkening fix:
The hair shader is still problematic (32 samples). No tint difference, just slightly darker.
Is this intended? I am starting to think it might be: wavelength importance sampling means certain wavelengths get sampled more often, so is a difference in tint expected?
The current spectral reconstruction method is not perfect, and I think it is the reason for those differences. It just wasn’t as noticeable before because of the darkening issues. The new spectral reconstruction method should give much better results.
That’s definitely not intended; all emissive materials must look identical, including the background. When using importance sampling, if certain wavelengths are selected more often or more rarely, the samples are weighted accordingly (divided by their selection probability) to ensure correct results.
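A tiny discrete sketch of that weighting, with a made-up two-bin “spectrum” (not Cycles code): even with deliberately skewed pick rates, dividing each sample by its selection probability keeps the expected result unchanged, so no tint should appear.

```python
import random

# Two wavelength bins with made-up intensities, picked with biased
# probabilities; the 1/pdf importance weight cancels the bias.
spectrum = {450: 2.0, 550: 6.0}      # intensity per wavelength bin
pick_prob = {450: 0.4, 550: 0.6}     # skewed selection probabilities

random.seed(1)
n = 100_000
total = 0.0
for _ in range(n):
    lam = 550 if random.random() < pick_prob[550] else 450
    total += spectrum[lam] / pick_prob[lam]   # importance weight = 1 / pdf
estimate = total / n                          # converges to 2.0 + 6.0 = 8.0
```

The green bin is picked 1.5x as often as the blue one, yet the estimate still converges to the true sum, 8.0.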
EDIT: apparently there is a variant that can do even wider-gamut spectral upsampling by going into fluorescence. Not sure if that’s something we want to pursue immediately, but I might as well share it. It seems like a cool option for sure:
Wide Gamut Spectral Upsampling with Fluorescence
Plus a follow-up to that paper:
Improving Spectral Upsampling with Fluorescence
And another paper even discusses spectral rendering in both the offline (Cycles-style) setting and real time, so that may also be interesting to look into. Maybe some spectral stuff could, in fact, eventually be supported by Eevee! (It sounds like the spectral real-time engine they built is currently fairly limited, though.)
Using Moments to Represent Bounded Signals for Spectral Rendering
Plus a follow-up to the above:
Spectral Rendering with the Bounded MESE and sRGB Data
One of the proposed applications is an improvement on hero wavelength sampling specifically for very spiky spectra.
And at least based on the video they provide as supplemental material, it looks like it wouldn’t even be that difficult to implement. It seems to amount to stochastically selecting multiple wavelengths, rather than using equally spaced ones around a single stochastically picked hero wavelength, and doing the appropriate weighting. I don’t actually know how difficult this would be, though.
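A sketch of the two selection schemes being contrasted; the 4-wavelength count and function names are assumptions for illustration, not the paper’s or Cycles’ actual code:

```python
import numpy as np

# Contrast the classic hero-wavelength scheme (one stochastic hero plus
# equally spaced companions) with independent stochastic selection.
LAMBDA_MIN, LAMBDA_MAX = 380.0, 730.0
N = 4  # wavelengths carried per path (illustrative choice)

def hero_wavelengths(u):
    """One stochastic hero plus N-1 companions at equal rotations of the
    unit sampling interval (equally spaced in wavelength here)."""
    offsets = (u + np.arange(N) / N) % 1.0
    return LAMBDA_MIN + offsets * (LAMBDA_MAX - LAMBDA_MIN)

def stochastic_wavelengths(rng):
    """Alternative: pick all N wavelengths independently; each sample then
    needs its own 1/pdf weight instead of a shared one."""
    return LAMBDA_MIN + rng.random(N) * (LAMBDA_MAX - LAMBDA_MIN)

hero = hero_wavelengths(0.0)    # -> [380.0, 467.5, 555.0, 642.5]
indep = stochastic_wavelengths(np.random.default_rng(0))
```

The structural change really is small: replace the deterministic rotation in `hero_wavelengths` with independent draws and adjust the per-wavelength weights, which matches the impression from the supplemental video.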
Sorry I haven’t had a chance to reply to these messages. @kram1032, while I can’t see anything spectral-specific in the stack trace, it could very well be because of one of the changes in the branch. Hopefully it’ll be resolved upstream if the issue is coming from Cycles X; otherwise we (probably @pembem22) will look into it.
And regular Cycles X renders fine now; it’s just the spectral branch that still won’t let me use the GPU.
EDIT: BTW, I just tested the new CleanAux with prefiltering in OIDN in spectral mode. The new fast prefiltering mode does not seem to work well with colour noise; Accurate mode works much better in spectral mode.