Thoughts on including Specular Manifold Sampling (Metropolis implementation)

I should look for some info in the shader system or online to help determine what counts as a sharp reflection/refraction, so those cases can be detected during shader compilation.

My first thought was to check the Glossy/Glass/Refraction/Transparent/Principled shaders (based on their roughness), but that's probably the wrong path.

Also, from what I’ve seen so far in the code, it doesn’t look like I can access all of the scene during a path to do something like scene.getAllCausticsObjects(). :thinking:

I think it would work similarly to has_surface_emission: just add something like has_sharp_specular to the various BSDFs and store that info the same way it's done for emission.
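Roughly what I have in mind, as a sketch (this is not actual Cycles code; the flag name, the roughness threshold and the closure structure are all made up just to illustrate the idea):

```cpp
#include <vector>

/* Hypothetical shader flags, in the spirit of the existing emission detection. */
enum ShaderFlag {
  SHADER_HAS_SURFACE_EMISSION = 1 << 0,
  SHADER_HAS_SHARP_SPECULAR = 1 << 1, /* proposed */
};

/* Simplified stand-in for a compiled closure. */
struct ClosureInfo {
  bool is_glossy_or_transmissive;
  float roughness;
};

/* Run once per shader at compile time: tag the shader if any closure is
 * (near-)perfectly specular, so it can later be treated as a caustic caster. */
int detect_sharp_specular(const std::vector<ClosureInfo> &closures, int flags,
                          float roughness_threshold = 0.05f)
{
  for (const ClosureInfo &c : closures) {
    if (c.is_glossy_or_transmissive && c.roughness < roughness_threshold) {
      flags |= SHADER_HAS_SHARP_SPECULAR;
      break;
    }
  }
  return flags;
}
```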

Looping over all objects in the kernel indeed is not practical. It also would just be bad for performance. If you want to sample random points on triangles with specular shaders, it should be done by building a distribution similar to emission and then sampling from that.
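Roughly, such a distribution could be built on the host like this (just a sketch, none of these structures exist in Cycles): collect the triangles whose shader carries the sharp-specular flag, accumulate an area-weighted CDF over them, and sample it the same way the light distribution is sampled.

```cpp
#include <vector>

struct CasterEntry {
  int prim_index; /* triangle to later sample a point on */
  float cdf;      /* cumulative, area-normalized weight */
};

/* Built once at scene sync from all triangles with a sharp-specular shader.
 * Assumes the inputs are non-empty and the total area is non-zero. */
std::vector<CasterEntry> build_caster_distribution(const std::vector<int> &prims,
                                                   const std::vector<float> &areas)
{
  float total_area = 0.0f;
  for (float a : areas)
    total_area += a;

  std::vector<CasterEntry> dist;
  float accum = 0.0f;
  for (size_t i = 0; i < prims.size(); i++) {
    accum += areas[i] / total_area;
    dist.push_back({prims[i], accum});
  }
  return dist;
}

/* Pick a caster triangle proportional to its area from a random number in [0, 1).
 * A real implementation would binary-search the CDF instead of scanning it. */
int sample_caster(const std::vector<CasterEntry> &dist, float randu)
{
  for (const CasterEntry &e : dist)
    if (randu <= e.cdf)
      return e.prim_index;
  return dist.back().prim_index; /* guard against round-off when randu is ~1 */
}
```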

It may be more practical to start with refractive caustics and straight line initialization for a first prototype, rather than sampling points on objects.

I see how a guiding system would be so useful here. Something like: during an initialization phase we map every object's relative accessibility in some way, and then, when a ray hits something, we can use that precomputed "map" to find which objects are most likely to have a big influence on the current point and send more rays their way.

Just my 2 cents. I'm not entirely familiar with the SMS algorithm you're using. But one thing they do in RenderMan is mark certain objects as the ones to start caustic exploration from: https://rmanwiki.pixar.com/display/REN23/Using+Manifold+Walk

Having a setting on the reflection/refraction node for enabling caustics seems to make sense, and maybe a similar approach would work here.

SMS starts from a point that may receive a caustic light contribution and sends a bunch of rays toward a caustics caster (in the original paper).
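As a very rough sketch of that loop (not the algorithm from the paper: the proper SMS weighting, which estimates the inverse probability of finding each solution through repeated trials, is left out, and the two helpers are placeholders passed in as callbacks just to keep the snippet self-contained):

```cpp
#include <functional>

struct float3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

/* From a shading point that might receive a caustic: pick seed points on a
 * marked caster, run a manifold walk (Newton-style iteration on the specular
 * constraint) toward the light, and accumulate the paths that converge. */
float3 estimate_caustic(const float3 &shading_point, const float3 &light_pos,
                        int num_seeds,
                        const std::function<float3()> &sample_point_on_caster,
                        const std::function<bool(const float3 &, const float3 &,
                                                 const float3 &, float3 *)> &manifold_walk)
{
  float3 result;
  for (int i = 0; i < num_seeds; i++) {
    const float3 seed = sample_point_on_caster();
    float3 contrib;
    if (manifold_walk(shading_point, seed, light_pos, &contrib)) {
      result.x += contrib.x / num_seeds;
      result.y += contrib.y / num_seeds;
      result.z += contrib.z / num_seeds;
    }
  }
  return result;
}
```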

The problem is that a lot of materials use the Principled BSDF, for example, which hides the reflection/refraction-causing components inside. And it may become too complex to configure if we start adding a bunch of booleans to it for that.

@brecht's proposal is to determine during shader compilation whether a shader is a valid caustic caster, and then use that as a marker to help the SMS algorithm.

It may also be possible to add a caustics boolean attribute to the object (if the previous idea is too complicated or not always satisfying). It would work in a similar way, as a marker to help the algorithm.
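Both markers could then be checked together in the kernel, something along these lines (hypothetical names, and whether the two should be combined with AND or OR is an open question):

```cpp
/* Hypothetical per-object and per-shader data as it might reach the kernel. */
struct ObjectInfo {
  bool caustics_caster; /* the user-set boolean attribute on the object */
};

struct ShaderInfo {
  int flags; /* would carry the compile-time sharp-specular flag from above */
};

const int SHADER_HAS_SHARP_SPECULAR = 1 << 1;

bool is_caustic_caster(const ObjectInfo &ob, const ShaderInfo &shader)
{
  /* Here the object attribute confirms what the shader compiler detected;
   * either marker could also be used on its own. */
  return ob.caustics_caster && (shader.flags & SHADER_HAS_SHARP_SPECULAR);
}
```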

But thanks, it may be easier if we could just mark objects as receiver/caster to avoid doing an SMS path for every pixel.

1 Like

Just for reference:
This paper has been widely popularized in the Blender community by this video:

It has been pitched a couple of times on Right-Click Select and discussed, among other places, on this forum here:

1 Like

Thanks for linking my thread.
@lukasstockner97 replied to my thread about how SMS seems to be extremely slow compared with regular path tracing:

For example, take a look at figure 14 in the paper. It’s an equal-time comparison, and you’ll find that it shows 4000 samples of plain PT vs. 50 or so samples of SMS. That means that each SMS path is almost 100x as expensive as a classic path.
“Yeah, but the image is soo much cleaner at 50 samples, right?” Well, the caustic is. However, if this was e.g. a glass sitting on a table in a full ArchViz scene, the caustic would look great while the remainder of the scene would look, well, like it’s been rendered with 50 samples instead of 4000. Remember, SMS only helps with caustics.
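For what it's worth, the arithmetic behind the "almost 100x" figure in that quote is just the ratio of the equal-time sample counts:

$$\frac{\text{cost per SMS path}}{\text{cost per PT path}} \approx \frac{4000 \text{ PT samples}}{50 \text{ SMS samples}} = 80 \approx 100\times$$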

I gave up suggesting SMS after reading about this 4000 samples vs 50 samples situation

In fig. 14 the 4000 samples were for the glass ball with plain PT. It's noisy as hell and a lot of points are either black or white. The biased SMS result was at 94 samples (26 for the unbiased variant) and was much cleaner. It was 1000 vs 50 for the swimming pool.

With caustic elements, a lot of plain path-tracing samples are wasted (they don't find a light). So SMS adds an extra computation that costs some time per pixel, but overall the scene may need fewer samples to get an impressive result.

I must say that if in 5 minutes you can take 4000 samples but get a noisy result, and you need ten times that work for a clear-ish result, SMS may be something that can really be helpful.

And if we get some sort of guidance, we could reserve caustic samples for SMS only and send more rays for normal sampling on other surfaces.

2 Likes

You didn't get what @lukasstockner97 said here:

“Yeah, but the image is soo much cleaner at 50 samples, right?” Well, the caustic is. However, if this was e.g. a glass sitting on a table in a full ArchViz scene, the caustic would look great while the remainder of the scene would look, well, like it’s been rendered with 50 samples instead of 4000. Remember, SMS only helps with caustics.

The paper crops out the caustics part of the image to show; it didn't compare the other parts of the image. And even if it did, the tested scene was not lit with bounced indirect light, so it wasn't meant to show that anyway.

EDIT:

The path guiding paper seems to have a kitchen scene for indirect light rendering. I am not sure about the timing though; there seems to be a "training time" before rendering, and I am not sure whether that "training" is a once-and-for-all thing or something you need to do for every scene before each render. If it is a once-and-for-all thing, it should reach more samples if given a full 5 minutes just to render.

It may be possible to increase performance by limiting where SMS is used; we don't need to run it on every pixel of the scene.
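For instance, a gate along these lines could decide per shading point whether SMS is worth running at all (a completely hypothetical heuristic and names, just to illustrate the idea of limiting where it runs):

```cpp
/* Hypothetical per-shading-point state. */
struct ShadingPoint {
  bool is_diffuse_or_rough; /* caustics land on non-specular receivers */
  bool caster_nearby;       /* e.g. a flagged caster found by a bounding-volume query */
  int bounce;               /* current path depth */
};

/* Only spend SMS work where a visible caustic is plausible: a rough receiver,
 * an early bounce, and a marked caster somewhere in the vicinity. */
bool should_run_sms(const ShadingPoint &sp, int max_caustic_bounce = 2)
{
  return sp.is_diffuse_or_rough && sp.caster_nearby && sp.bounce <= max_caustic_bounce;
}
```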

About the path guiding paper: from what I've read, some extra rays are sent to find paths during the actual sampling, to help guide the next samples.

EDIT:

Also, I think SMS should be an optional setting that you activate only if you want it, to avoid doing SMS in projects that don't require it.

1 Like

I just read some more about it, still not really sure what it is, but I kind of get the idea that it is a per-render thing, so basically it needs to be done every time we hit render.

There is still one thing bugging me though. Here's a video on YouTube that mentioned the method in that paper:

What does he mean by "online training"? Is this "online" referring to an Internet connection? I don't really think that makes much sense, so I believe the answer is no, but what does "online" mean here?

Also, upon searching in DevTalk, I found this:

Well, yes, that comes back to the online guiding topic a bit

Online guiding?

I am confused

To clarify, "online" in "online training" just means the guiding distributions are trained on the fly, from the samples generated while rendering, rather than in a separate pre-processing pass beforehand, as in online learning in machine learning. It has nothing to do with an Internet connection.


2 Likes

Thanks, that makes sense. I had also suspected as much because of the "offline renderer" vs "realtime renderer" thing. But since I almost never hear anybody call UE an online renderer, and when I searched for online rendering I got render farms as results, I thought that might not be the case. But it seems like it is, thanks for that.

2 Likes