Specular Manifold Sampling for Cycles: Caustics and Speed

Please check out the Specular Manifold Sampling method in this video

It is based on a more complex method, the Manifold Exploration technique proposed in 2012, but the newer Specular Manifold Sampling method seems easier to implement.

A video that goes into more detail:
(If the video is not working, please use this link)

Will this be in Cycles? It seems to be friendly to both biased and unbiased rendering, as well as to the spectral rendering that Cycles is developing. It sounds better to me than the previously discussed bidirectional + photon mapping technique. What do the Cycles developers think about this?


Link for the paper:

Link to its GitHub:


I too would love to see what the Cycles developers think of this, and of advancements in research in general (and which of those they’d like to implement).

That being said, here are the reasons why I wouldn’t get my hopes up too high:

  • Brecht, the Cycles dev, is AFAIK highly involved with Blender development right now, because he’s one of the lead architects there. If I understand correctly, he doesn’t have much time for Cycles these days.
  • This is fresh research. It hasn’t proven itself in more niche renderers yet, and I’m not sure whether the reference implementation works on GPUs. Implementing this would be hard, as there aren’t any resources on it beyond the paper itself. In fact, I think Cycles is highly inspired by PBRT, a book by a group of computer graphics researchers on how to build a state-of-the-art path tracer.
  • Implementing this in Cycles would be a huge undertaking that might just turn out to be for nothing when newer research arrives that needs an entirely different architecture.
  • Cycles is supposed to be a production renderer, not a scientific renderer. Even assuming the developers were able to implement this, the time might still have been better spent on something else; those caustics aren’t something you need that often, right? Most scenes don’t have any.
  • There are lots of basically ready patches that have not been merged, like blue-noise sampling. I’m sure that these well-understood ideas, which have made it into other renderers, have a tangible benefit for all users, and a working patch will be preferred over such new and experimental ideas.
  • And if there were time to work on something big, it would more likely be something like spectral rendering, which the newest edition of PBRT covers and which some community members are working on implementing.

Actually, reflective caustics are more common than you think. If you look around, you can find them quite often. I can see them beside and inside my mug, as well as on many other low-roughness objects.

Also, it seems that caustics would not be the only benefit; the overall speed would increase as well. @ogierm As “enilnacs” wrote in the Blender Artists post:

We need this in Cycles, this would finally allow true and FAST full light transport!
This is already implemented in GPU Optix on their github project!
The sample count of around 128 to 64 samples for FULL CLEAN render convergence is INSANE!
Imagine the speed cycles would have, OMG!
This would bring Cycles roughly some 100 billion lightyears ahead any render engine

I am not sure, you may be right. But another user named “Zach Hixson” said this on the Right Click Select page:

Not necessarily. Cycles currently has only one sampling method (maybe two, if branched path tracing is counted as a method), but most other renderers have multiple. LuxRender, for example, has Bidir and Path engines, and many different samplers such as Sobol and Metropolis.

And a user named “Hadriscus” said on the Blender Artists page:

Not a day passes without me hearing about some new light transport method, and I never see them end up in Cycles. I guess it’s not as easy as plug-and-play… maybe this one will be the exception? Here’s to hoping.

And “enilnacs” replied:

Yes, you’re right, many things were proposed, but this is the first complete light transport solution that has no real compromises. Also GPU friendly. This is worth implementing.
Yes, it’s incredible. I’m playing with the github stuff now.

I am not sure about this; whether it would be hard to implement should be looked at by the devs first, because from the sounds of it, it should not be that hard to do.

If what “enilnacs” said as quoted above is true, it should be GPU friendly.

Also on the Blender Artists page, smilebags, who is working on Spectral Cycles, confirms it should work with spectral rendering as well.

Here is the Blender Artists page:


Nice, that sounds really good.

This program can convert Blender scenes for use in Mitsuba 2; from there you can render with the open-source Specular Manifold Sampling code.


Another reply from the Right Click Select page, by “matteo1”:

Indeed. We don’t have good caustics in Cycles because many of its features wouldn’t work with bidirectional path tracing (i.e. shooting light paths from the light sources as well as from the camera, to help find all those difficult reflected/refracted paths that create caustics). And bear in mind that all light that passes through any kind of glass (such as a closed window) is, strictly speaking, a “caustic”. Cycles simplifies those cases, ignoring refraction for incoming light rays, giving results that are not entirely correct. Many other render engines rely on some form of bidirectional path tracing to account for those effects.
To my understanding, this approach is something to be really excited about, because it should be implementable in a path tracer without necessarily requiring any path being cast from the light source. This means that it should be possible to implement it without any fundamental change to Cycles.
This method uses randomly found refracted/reflected direct paths to light sources (that generally appear as fireflies), as seed paths. Once one of those paths is determined, it is able to quickly find a multitude of other valid reflected/refracted paths between the same two points (camera and light source). In the video, while rendering the torus scene, you can notice that some bright areas appear all of a sudden. My guess is that, as the path tracer explores new random paths, and one of those eventually hits the light (normally causing the appearance of a firefly), the algorithm is suddenly able to unlock a whole new bunch of similar ones that are responsible for that specific caustic or bright area…
I’m really excited about this, and I would be really super happy if this landed in Cycles.
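The seed-and-explore idea described above can be sketched in miniature. Below is a toy 2D “manifold walk” — everything here (the parabola mirror, the point names) is made up for illustration, and real SMS performs Newton steps on the specular constraint of full 3D light paths — but it shows the core mechanic: given a seed guess, iterate toward the exact point on a curved specular surface that connects camera and light.

```python
import math

def mirror(t):
    """Parametric mirror: a shallow parabola y = 0.2 * x^2."""
    return (t, 0.2 * t * t)

def tangent(t):
    """(Unnormalized) tangent of the parabola at parameter t."""
    return (1.0, 0.4 * t)

def constraint(t, cam, light):
    """Projection of the half-vector onto the tangent.

    Zero exactly where the mirror point is a valid specular reflection
    connecting cam and light (half-vector parallel to the normal).
    """
    p = mirror(t)
    def unit_toward(a):
        d = (a[0] - p[0], a[1] - p[1])
        l = math.hypot(d[0], d[1])
        return (d[0] / l, d[1] / l)
    wc = unit_toward(cam)
    wl = unit_toward(light)
    h = (wc[0] + wl[0], wc[1] + wl[1])
    tx, ty = tangent(t)
    return h[0] * tx + h[1] * ty

def manifold_walk(t0, cam, light, steps=30, eps=1e-4):
    """Newton-iterate the constraint to a root, starting from seed t0."""
    t = t0
    for _ in range(steps):
        c = constraint(t, cam, light)
        if abs(c) < 1e-9:
            break
        dc = (constraint(t + eps, cam, light) - c) / eps  # finite difference
        t -= c / dc
    return t

cam, light = (-2.0, 3.0), (2.0, 3.0)
# Symmetric setup: the valid reflection point is at the parabola's apex (t = 0).
t_star = manifold_walk(0.5, cam, light)
```

The point of the exercise: once a rough seed exists, the exact specular connection can be found deterministically, without blind random sampling — which is why a single firefly can “unlock” a whole caustic region.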


All the light that is currently illuminating my room, entering through the glass of my closed window, is, technically speaking, a caustic.

EDIT: There are a lot of fields, in addition to VFX, such as archviz or product visualization (imagine a commercial for a bottle of perfume, a watch, or whatever shiny chrome product), in which not having accurate caustics is a dealbreaker.


Since the Blender Foundation now has a little more money, why not approach the Ph.D. student who wrote that paper and offer him a grant to help implement it in Cycles…?


Here’s another possible candidate for a Cycles light tracing upgrade:

Guys, relax a bit :slight_smile:

I think like 15 different people sent me links to this video over the last two days. While it definitely is an interesting paper, I think the video is way too optimistic and uncritical.

Yes, from a research standpoint this is a great algorithm. But reading through the threads about it, everyone seems to think that it is the holy grail of rendering that will give you perfect images within seconds and magically solve everything that is wrong with renderers. That is NOT the case.

First of all: this is only about caustics. Nothing else. It will not make any difference to anything that is not a caustic. If you currently have caustics turned off in Cycles, this will make nothing faster or less noisy.

That being said, this algorithm is far from being practical for production rendering. Look at all the example scenes - note how they consist pretty much only of objects that cause caustics? That’s because this algorithm works by picking a point on an object and then trying to construct a specular manifold connecting to a light source. That’s easy if you only have caustics-casting objects in your scene, but not so much in a realistic use case.

For example, take a look at figure 14 in the paper. It’s an equal-time comparison, and you’ll find that it shows 4000 samples of plain PT vs. 50 or so samples of SMS. That means that each SMS path is almost 100x as expensive as a classic path.
“Yeah, but the image is soo much cleaner at 50 samples, right?” Well, the caustic is. However, if this was e.g. a glass sitting on a table in a full ArchViz scene, the caustic would look great while the remainder of the scene would look, well, like it’s been rendered with 50 samples instead of 4000. Remember, SMS only helps with caustics.
And on top of that, it seems like some people completely missed the “equal-time” part and now think that every render will look good in 10 seconds because it says “50 samples” and 50 samples are fast with Cycles’ current algorithm…
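As a back-of-envelope check of that equal-time figure (using the rough numbers quoted above, not exact values from the paper):

```python
# Same wall-clock budget, very different sample counts: the ratio is the
# per-sample cost multiplier of SMS over plain path tracing.
pt_samples = 4000   # plain path tracing in the equal-time comparison
sms_samples = 50    # SMS in the same time
cost_ratio = pt_samples / sms_samples  # ~80x per sample
```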

Even the paper says this:

Determining when to use our method is another important aspect for future investigation. Attempting many connections that are ultimately unsuccessful can consume a large amount of computation.

That is what I describe above - the glass on the table would only cast a noticeable caustic on the table, but how is the algorithm supposed to know that? So it always has to try to connect a specular manifold via the glass, which will fail in 99% of the cases.
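The failure-rate argument can be phrased as a toy expected-cost model (the numbers are made up, apart from the ~99% failure rate quoted above):

```python
def expected_cost_per_success(cost_per_attempt, success_prob):
    """Average work paid for one *useful* manifold connection: each
    attempt costs the same whether or not it succeeds."""
    return cost_per_attempt / success_prob

# A 99% failure rate means every useful caustic connection carries the
# cost of roughly 100 attempts.
hundred_x = expected_cost_per_success(1.0, 0.01)
```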

Also, note that the paper explicitly describes this as a “path sampling technique”, not as a rendering algorithm, because the authors themselves know that this is “only” a building block for future research and not something that is immediately usable.

Building blocks like this are important, but turning them into a practical rendering algorithm is another massive challenge. It’s much more likely that it will first be used for much more limited applications - for example, I remember reading about some company having custom integrator code for light transport in eyes, since caustics etc. are important to get the look just right for VFX. That’s a much more limited and therefore more practical use case, so there’s a good chance that whoever is working on that is taking a close look at this paper currently.

It’s not a question of how hard some algorithm is - I could easily implement a basic form of this, or SPPM, or VCM, or whatever in Cycles in a week or so. The problem is that Cycles is a production renderer, and that comes with expectations.

Everything we include has to work with live preview rendering, GPU rendering, ray visibility options, arbitrary surface materials, arbitrary light materials, volumetrics, DoF and motion blur, tiled rendering, denoising and so on - that’s not even close to the full list. Every single point I mentioned above is a complication/challenge when implementing VCM, for example.

It’s honestly impressive to see just how much attention this one paper got from this video - there’s many more out there showing comparable improvements for some special case, but they didn’t get nearly the same attention.

Don’t get me wrong, the paper is impressive and a great achievement. And I like the YouTube channel in question, but one clear problem is that it’s purely hype and not a pragmatic look at the topic. The number of times I’ve explained over the last few days why we can’t “just implement this and be the most advanced renderer in the world, 100 lightyears ahead of everyone else”…

So, here’s a somewhat lengthy explanation that people will hopefully see.


Lukas, thank you very much for the time you took to share with us your detailed opinion about it.

I am really no expert on the matter, but I couldn’t stop wondering: wouldn’t it be beneficial to have this (or a similar algorithm) standing by the currently available integrator, so that, every time Cycles randomly finds a bright caustic path (such as one that would, to my understanding, cause a firefly), it invokes this new algorithm just to explore a set number of neighboring points around said firefly, before giving the floor back to the standard, fast integrator?


In theory something like that is certainly doable - for example, caustic paths discovered through classic PT could be used as seeds for MCMC-like algorithms like the mentioned manifold exploration.

However, once you go into the details it starts to get tricky again. For example, how do you pick “neighboring points”? If the perturbation radius is too wide, you waste a lot of effort, and if it is too narrow you end up with small spots of caustics rather than a smooth result.
Additionally, a lot of the path weighting math in path tracing relies on the probability densities of path vertex sampling cancelling out with the geometric terms of the light throughput. If you pick points based on distance to a seed path, that’s no longer the case, so you have to include that. That’s still doable, but then you start to have the problem that maybe you don’t know the probability distribution of the seed paths - if there is bias in their distribution, the brightness of the caustics might not end up correct unless you account for that. And so on…
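The density-accounting issue can be illustrated with a hypothetical sketch (all names here are made up, and this is toy code, not Cycles): if neighbor points are drawn from a Gaussian around a seed, an unbiased estimator must divide by that Gaussian’s density, because it no longer cancels the way the usual BSDF/light sampling densities do.

```python
import math, random

def perturb(seed, sigma):
    """Draw a 2D Gaussian offset around the seed point."""
    return (random.gauss(seed[0], sigma), random.gauss(seed[1], sigma))

def gauss_pdf(p, seed, sigma):
    """Density of the isotropic 2D Gaussian used by perturb()."""
    dx, dy = p[0] - seed[0], p[1] - seed[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma)) \
        / (2.0 * math.pi * sigma * sigma)

def estimate(f, seed, sigma, n=20000):
    """Monte Carlo estimate of the plane integral of f, importance-
    sampled around the seed and re-weighted by 1/pdf to stay unbiased."""
    total = 0.0
    for _ in range(n):
        p = perturb(seed, sigma)
        total += f(p) / gauss_pdf(p, seed, sigma)
    return total / n
```

Integrating the indicator function of the unit disk this way returns roughly π regardless of sigma; drop the 1/pdf factor and the answer depends on how concentrated the perturbations are - that is the kind of bias being pointed at.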

I’m not saying that it’s not possible - if you cleanly subdivide the light path space into paths that are handled by regular PT and paths that are handled by whatever caustics logic you have, and assume that PT will give you an unbiased sample of seed paths from the second class of paths, then it might just turn out that all of the tricky stuff cancels out again and it works. Or it might not - it’s not really possible to just know that instinctively; it takes a lot of math and experimentation to figure out (at least for me).

But even then, that’s just the basic light transport algorithm. You still need to implement it. Let’s say a GPU thread finds a caustic path - it can’t just start doing some special caustic magic, because that blocks all other threads in the work group. If you want to evaluate a lot of paths based on it, you’d need to add it to some sort of global queue, then read that back to the host and schedule a separate kernel just for the caustics handling. And so on…
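A CPU-side mock of that two-pass scheduling (the structure and names are illustrative, not actual Cycles kernels): the trace pass defers caustic seeds into a shared queue instead of diverging mid-kernel, and a second pass drains the queue as uniform work.

```python
# Mock "wavefront" scheduling: paths are dicts, pixels are keys; the
# expensive caustic handling is deferred to a separate uniform pass.

caustic_queue = []

def trace_kernel(pixel_paths):
    """First pass: shade ordinary paths, queue caustic seeds for later."""
    results = {}
    for pixel, path in pixel_paths:
        if path.get("hits_caustic"):
            caustic_queue.append((pixel, path))  # defer, don't diverge
            results[pixel] = 0.0
        else:
            results[pixel] = path.get("radiance", 0.0)
    return results

def expensive_caustic_estimate(path):
    """Placeholder for the actual SMS / manifold-walk machinery."""
    return path.get("seed_radiance", 0.0)

def caustic_kernel(queue):
    """Second pass: every entry gets the same kind of work, so a real
    GPU work group would stay coherent here."""
    return {pixel: expensive_caustic_estimate(path) for pixel, path in queue}
```

On a real GPU the queue readback and the extra kernel launch are exactly the scheduling overhead being described.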

Sorry for yet another wall of text with not much substance beyond “well, idk, might work but I can’t tell for sure” - I hope this at least gives some insight into why this stuff is not as simple as it seems.


It surely does, and it is also nice to read some insight into how complex the job of the whole path-tracing algorithm(s) is (not to mention the job of the coders :wink: )

Could the radius be modulated with a Gaussian or similar curve, to obtain more samples at small offsets and then a few wider ones just to check whether they are needed? And if so, set the radius bigger accordingly, then reiterate the Gaussian curve…?

My friend and I are planning to tackle the particular topic of implementing this in Cycles, but it will likely take some time. It wouldn’t hurt to have another integrator which is basically PT + SMS, with the ability to mark objects as caustic casters, caustic receivers and caustic bouncers. I’m also looking forward to Spectral Cycles, because that is more than real now, might also land in the future, and would be a great addition.


I just read it, LOL, one year later… :smiley:
But Lukas, OK, I got it, the problem is… we need a caustics solution even if it’s just a secondary kernel and an extra AOV. That’s why Pixar took up VCM at all: to have it for those difficult shots. Of course I can install LuxRender or RenderMan, but wouldn’t it be nice to get proper caustics from Cycles?
And now comes the thing: this is doable on GPU too, AFAIU… The idea is elegant, let’s be honest, so why not?
So again, the issue is not that we want luxury; it’s just that we can’t ATM render any (proper) caustics in Cycles. I hope you understand. And like you said, if it takes only a week, why not? This thing would be faster on GPU than VCM on CPU. And we only need a simple AOV to get that caustic pool going :smiley: The ultimate holy grail of light. :smiley:


Thanks @lukasstockner97 for the in-depth explanation.

I have a little experience testing rendering options, and I found something that works really well for caustics, using a render engine that calculates caustics really fast. I selected the objects that contribute to the caustics, rendered out the image with only those objects and with caustics on, and denoised it. I then rendered the scene again with caustics off, to a higher quality. After I combined the caustics image with the second image, it looked like they had been rendered together. Being able to separate the caustics into a caustics-only pass worked so well that I was able to use the caustics pass on a render done with a completely different engine and have it look great.

I guess what I’m saying is: if caustics could be computed by Specular Manifold Sampling (or any better, simpler method) completely separately - even in parallel with the GPU render, on otherwise unused CPU cores, which I don’t use for renders because the GPU is better for the non-caustics work - and then combined later, it would be way faster than caustics currently are in Cycles. A separate sample count and separate settings could be used just for the caustics. An option for the user to mark which objects count as caustics objects could also be added, so that only those appear in the caustics render. That would also get around the problem above, because the algorithm would only ever see a scene that, from its perspective, consists solely of caustics objects. The algorithm doesn’t need to know that “the glass on the table would only cast a noticeable caustic on the table”, because the user would say that only the glass, the table, and this one light are to be used for the caustics calculation. So if making it part of Cycles would be too difficult, have it run separately and combine the 2D images later.
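A minimal sketch of that recombination step, assuming the caustics render contains only the caustic light contribution (so the two renders add linearly; flat lists stand in for image buffers):

```python
def combine(beauty_no_caustics, caustics_pass):
    """Per-pixel additive merge of a caustics-only pass over a beauty
    render that had caustics disabled."""
    assert len(beauty_no_caustics) == len(caustics_pass)
    return [b + c for b, c in zip(beauty_no_caustics, caustics_pass)]
```

In a compositor this is just an Add node on the linear (pre-tonemap) passes; the assumption that the two contributions are disjoint is what makes the plain add correct.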


Are the devs aware of the upcoming photon traced caustics in Octane? This method looks pretty amazing. https://render.otoy.com/forum/viewtopic.php?f=33&t=78783

The Photon tracing in Octane looks very similar to Photon tracing in LuxCore renderer.

I believe there is some opposition to photon tracing for Cycles due to some limitations it has, but I may be wrong on this.


By the looks of it, in Octane you have to decide which kernel you want to use, so if you use the photon tracing kernel, you lose the pure path tracer? I’m not sure what’s happening in the videos.

How would a full room work with that to generate caustics in a glass of water?

Anyway, Photon Cache + Light Tracing is what Lux does; the results are pretty amazing, especially for light tracing, but it cannot solve SDS paths, and then it needs the photon cache, which, while cool, is slower too. With Lux it’s a matter of balance :slight_smile:
