Thoughts on including Specular Manifold Sampling (Metropolis implementation)

The paper seems to propose two methods for initialization: sampling points on all specular surfaces, or taking BSDF samples to find hit points. That’s similar to how emissive surfaces are already sampled, and I imagine methods like MIS or light trees could apply.

In real-world scenes most surfaces have some specular component, so it’s not clear that sampling across all specular surfaces is going to be efficient. I imagine BSDF sampling, or users manually tagging some objects, could be more practical. Even then you have a multiplicative cost: the number of light sources times the number of specular surfaces.

Path guiding could be a good complement, guiding the sample directions for initialization and then using SMS to refine them.

Gathering a list of shapes that lead to caustics, as you mention, would also go in the direction of learning/guiding. To avoid double counting paths, you’d need to identify and exclude those paths in the first iteration, or use a strategy similar to path guiding, where you either throw away the first iteration or adaptively weight it against the following iterations based on variance.
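The adaptive weighting could look like inverse-variance weighting. A minimal sketch, assuming we keep the per-sample values of each iteration around (the function name and data layout are made up for illustration):

```python
import statistics

def combine_iterations(estimates_per_iter):
    """Combine per-iteration pixel estimates with inverse-variance weights.

    estimates_per_iter: list of per-sample value lists, one per iteration.
    High-variance iterations (e.g. an unguided first pass) get less weight,
    so they are down-weighted rather than thrown away entirely.
    """
    means, weights = [], []
    for samples in estimates_per_iter:
        mean = sum(samples) / len(samples)
        # Unknown variance (single sample) is treated as infinite,
        # which gives that iteration zero weight.
        var = statistics.variance(samples) if len(samples) > 1 else float("inf")
        means.append(mean)
        weights.append(1.0 / (var + 1e-8))  # inverse-variance weight
    total = sum(weights)
    return sum(m * w for m, w in zip(means, weights)) / total
```

With a noise-free first iteration and a noisy second one, the result stays close to the first iteration’s mean.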

I personally think that the most efficient first prototype would be to let the user mark an object as a caster, so the algorithm only needs to fetch them from the scene. But, given Cycles’ design goals, it may be interesting to find a way for Cycles to automatically find objects that can be used.

Checking the source code, I see that shaders have a number of boolean flags that help quickly detect which type of shader it is. Would it be possible, without polluting the system, to add one that can be determined from the reflective/refractive elements?

To avoid excessive computation when the scene is complex, it may be interesting to simplify some parts of SMS. After completing a path (single or multi-bounce), instead of linking it to one light we could link it to every light source, discard those that aren’t visible, and then weight each remaining light by its distance from the end of that path before doing the final SMS calculation.
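To make that concrete, here is a rough sketch of the idea (the light representation and the `is_visible` occlusion query are invented placeholders standing in for a real shadow-ray test, not Cycles API):

```python
import random

def pick_light_for_sms(path_end, lights, is_visible):
    """Connect a completed specular chain end to one light.

    `lights` is a list of (position, power) tuples. Occluded lights are
    discarded, the rest are weighted by power / distance^2, and one is
    chosen with probability proportional to its weight.
    """
    weighted = []
    for pos, power in lights:
        if not is_visible(path_end, pos):
            continue  # drop occluded lights before the expensive SMS solve
        d2 = sum((a - b) ** 2 for a, b in zip(path_end, pos))
        weighted.append(((pos, power), power / max(d2, 1e-8)))
    if not weighted:
        return None, 0.0
    total = sum(w for _, w in weighted)
    r = random.random() * total
    for light, w in weighted:
        r -= w
        if r <= 0.0:
            return light, w / total  # chosen light and its selection probability
    return weighted[-1][0], weighted[-1][1] / total
```

If only one light survives the visibility test, it is selected with probability 1, so the estimator stays consistent as long as the selection probability is divided out.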

It really looks like we first need some guiding tools for path calculation. That may be a requirement to avoid duplication or polluting Cycles with “special case” code. It would be great to implement the guiding code as a tool that can be used both by the “normal” path tracer and by SMS (or other types of algorithms that may come in the future).

EDIT: thanks for your input and answer, it’s really encouraging :smiley:

Could this work with some sort of “combined threshold” parameter that takes into account the roughness of the reflection/refraction, the size of the emitter, and the strength of the emission? While the value is in a specified range, activate the caustic sampling (photon emission?); below the threshold, fade it to zero.
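Just to illustrate the idea, a minimal sketch; the score formula, the threshold values, and the smoothstep fade are made-up placeholders, not a tested heuristic:

```python
def caustic_sampling_weight(roughness, emitter_size, emission_strength,
                            threshold=0.5, fade_range=0.5):
    """Hypothetical combined threshold: 1.0 means full caustic sampling,
    0.0 means it is disabled, with a smooth fade in between."""
    # Sharp surfaces (low roughness), small emitters and strong emission
    # all make caustics more pronounced, so they raise the score.
    score = (1.0 - roughness) * emission_strength / (1.0 + emitter_size)
    # Map the score through a smoothstep so the weight fades to zero
    # below `threshold - fade_range` instead of switching off abruptly.
    t = (score - (threshold - fade_range)) / fade_range
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```

A perfectly sharp surface under a strong point-like emitter gives weight 1.0; a fully rough surface gives 0.0 and could skip caustic sampling entirely.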

Hi @brecht,

Just chiming in, in case that helps: Mitsuba 2 uses the 3-clause BSD license. The SMS implementation is released under the same license.

I don’t know for now.
SMS does additional sampling that gives you caustics (biased and unbiased methods may be possible in the same implementation). I read your thread about photon emission, and it looks like it would require a lot of work to get it working with the path tracer, as it’s a completely different technique. And it’s probably not animation friendly.

Here the unbiased method isn’t either (well, that may depend on some customizable settings), but the biased method (with or without additional constraints) is supposed to be stable and animation friendly.

Just to be sure it’s clear: my initial plan was to get help from that SMS implementation but rework it from scratch, as it doesn’t match how Cycles handles a render path. The code is mostly useful for the estimator and the manifold walk (I don’t have the required level to understand it 100% correctly and implement the heavy mathematics in the algorithm myself).

It’s indeed possible to add more flags to detect when a shader has (sharp) reflection/refraction.

I should look for some information in the shader system or online to help determine what can be considered a sharp reflection/refraction, so they can be detected during shader compilation.

My first thought was to check the Glossy/Glass/Refraction/Transparent/Principled shaders (with roughness), but that’s probably the wrong path.

Also, from what I’ve seen so far in the code, it doesn’t look like I can access the whole scene during a path to do something like scene.getAllCausticsObjects(). :thinking:

I think it would work similarly to has_surface_emission: just add something like has_sharp_specular to various BSDFs and store that info the same way it’s done for emission.
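A rough sketch of what that compile-time flag could look like; the class, field names, and roughness cutoff here are invented for illustration and don’t match the real Cycles shader-graph code:

```python
SHARP_ROUGHNESS_EPS = 1e-3  # assumed cutoff for what counts as "sharp"

class Closure:
    def __init__(self, kind, roughness=0.0):
        self.kind = kind            # e.g. "glossy", "refraction", "diffuse"
        self.roughness = roughness

def compile_shader_flags(closures):
    """Derive shader-level flags from the closure list at compile time,
    mirroring how has_surface_emission is determined."""
    flags = {"has_surface_emission": False, "has_sharp_specular": False}
    for c in closures:
        if c.kind == "emission":
            flags["has_surface_emission"] = True
        # Any sufficiently sharp specular closure marks the whole shader
        # as a potential caustic caster.
        if c.kind in ("glossy", "glass", "refraction") \
                and c.roughness <= SHARP_ROUGHNESS_EPS:
            flags["has_sharp_specular"] = True
    return flags
```

A shader containing a zero-roughness glossy closure would get the flag; the same closure with roughness 0.5 would not.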

Looping over all objects in the kernel is indeed not practical, and would be bad for performance. If you want to sample random points on triangles with specular shaders, it should be done by building a distribution similar to the emission one and then sampling from that.
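A minimal sketch of such a distribution, assuming per-triangle areas and a sharp-specular flag are available; this mirrors the emission-CDF idea and is not actual Cycles code:

```python
import bisect

def build_specular_distribution(triangles):
    """Build an area-weighted CDF over the triangles flagged sharp-specular.

    `triangles` is a list of (area, has_sharp_specular) tuples; real code
    would store triangle indices and sample a point on the chosen triangle.
    """
    indices, cdf, total = [], [], 0.0
    for i, (area, sharp) in enumerate(triangles):
        if not sharp:
            continue  # non-specular triangles never enter the table
        total += area
        indices.append(i)
        cdf.append(total)
    return indices, cdf, total

def sample_specular_triangle(indices, cdf, total, u):
    """Map a uniform random number u in [0, 1) to a triangle index,
    with probability proportional to the triangle's area."""
    k = bisect.bisect_left(cdf, u * total)
    return indices[k]
```

Larger specular triangles are then picked proportionally more often, just as brighter emitters are in the emission distribution.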

It may be more practical to start with refractive caustics and straight line initialization for a first prototype, rather than sampling points on objects.

I see how a guiding system would be so useful here. Something like: during the initialization phase, we map every object’s relative accessibility in some way, and then, when a ray hits something, we can use that precomputed “map” to find which objects are most likely to have a large influence on the current point and send more rays their way.

Just my 2 cents. I’m not entirely familiar with the SMS algorithm you’re using, but one thing they do in RenderMan is mark certain objects as ones to start caustic exploration from: https://rmanwiki.pixar.com/display/REN23/Using+Manifold+Walk

Having a setting on the reflection/refraction node for enabling caustics seems to make sense; maybe that would be a similar way to do this.

SMS starts from a point that may receive a caustic light contribution and sends a bunch of rays toward a caustics caster (in the original paper).

The problem is that a lot of materials use the Principled shader, for example, which hides the reflection/refraction-causing nodes inside. And it may become too complex to configure if we start adding a bunch of booleans to it.

@brecht’s proposal is to determine during shader compilation whether the shader will be a valid caustic caster, and then use that as a marker to help the SMS algorithm.

It may also be possible to add a caustics boolean attribute to the object (if the previous idea is too complicated or not always satisfying). It would work in a similar way, as a marker to help.

But thanks, it may be easier if we could just mark objects as receivers/casters to avoid doing an SMS walk for each pixel.

Just for reference:
This paper has been widely popularized in the Blender community by this video:

It has been pitched a couple of times on Right-Click Select and discussed, among other places, on this forum here:

Thanks for linking my thread.
@lukasstockner97 replied to my thread about how SMS seems to be extremely slow compared with regular path tracing:

For example, take a look at figure 14 in the paper. It’s an equal-time comparison, and you’ll find that it shows 4000 samples of plain PT vs. 50 or so samples of SMS. That means that each SMS path is almost 100x as expensive as a classic path.
“Yeah, but the image is soo much cleaner at 50 samples, right?” Well, the caustic is. However, if this was e.g. a glass sitting on a table in a full ArchViz scene, the caustic would look great while the remainder of the scene would look, well, like it’s been rendered with 50 samples instead of 4000. Remember, SMS only helps with caustics.

I gave up suggesting SMS after reading about this 4000 samples vs 50 samples situation

In fig. 14 the 4000 samples were for the glass ball; it’s noisy as hell and a lot of points are either black or white. The biased method was at 94 samples (26 for the unbiased) and the result was much cleaner. It was 1000 vs 50 for the swimming pool.

On caustic elements, a lot of regular samples are wasted (they don’t find a light). So SMS adds additional computation that takes some time per pixel, but overall the scene may need fewer samples to get an impressive result.

I must say that if in 5 minutes you can take 4000 samples but get a noisy result, and you’d need ten times that work for a clear-ish one, SMS may be something that can really be helpful.

And if we get some sort of guidance, we could reserve caustic samples for SMS only and send more rays for normal sampling on other surfaces.

You didn’t get what @lukasstockner97 said:

“Yeah, but the image is soo much cleaner at 50 samples, right?” Well, the caustic is. However, if this was e.g. a glass sitting on a table in a full ArchViz scene, the caustic would look great while the remainder of the scene would look, well, like it’s been rendered with 50 samples instead of 4000. Remember, SMS only helps with caustics.

The paper crops to the caustics part of the image; it didn’t compare the other parts. Even if it did, the tested scene was not lit with bounced indirect light, so it wasn’t meant to show that anyway.

EDIT:

The path guiding paper seems to have a kitchen scene for indirect light rendering. I’m not sure about the time though; there seems to be a “training time” before rendering, and I’m not sure if that “training” is a once-and-for-all thing or something you need to do for every scene before each render. If it is once-and-for-all, it should have reached more samples when given a full 5 minutes just to render.

It may be possible to increase performance by limiting where SMS is used; we don’t need to run it on every pixel of the scene.

About the path guiding paper: from what I’ve read, some extra rays are sent to find paths during the actual sampling, to help guide the next samples.

EDIT:

Also, I think SMS should be an optional setting, activated only if you want it, to avoid doing SMS in projects that don’t require it.

I just read some more about it. I’m still not really sure what it is, but I get the idea that it is a per-render thing, so basically it needs to be done every time we hit render.

There is still a thing bugging me though. Here’s a video on YouTube that mentioned the method in that paper:

What does he mean by “online training”? Is this “online” referring to an Internet connection? I don’t really think that makes much sense, so I believe the answer is no, but what does this “online” mean?

Also, upon searching in DevTalk, I found this:

Well, yes, that comes back to the online guiding topic a bit.

Online guiding?

I am confused

Something like the following?


Thanks, that makes sense. I had also suspected as much because of the “offline renderer” vs “realtime renderer” thing. But since I would almost never hear anybody calling UE an online renderer, and when I searched for online rendering I got render farms as results, I thought that might not be the case. But it seems like it is; thanks for that.
