Suggestion for reducing noise in Cycles (branched path tracing)

I have a suggestion for seriously reducing noise when using branched path tracing in Cycles. It involves switching from Monte-Carlo methods to deterministic methods for choosing the directions of the indirect rays sent out when a ray bounces. The implementation in Cycles should be relatively easy (depending on the code). This method (which I will outline below) could produce low-noise renders with as few as 5 diffuse samples per ray bounce.

First of all, the standard in renderers is to use Monte-Carlo methods both for deciding which directions to send rays after a ray from the camera strikes a surface in the scene, and for sampling different parts of pixels (for anti-aliasing). My method only changes the directions that the indirect rays are sent out in. Now, when a ray is sent out from the camera (or bounces off of something) and strikes an object, several things usually happen:

  • Rays are sent out from the bounce point towards lights to see if the point can see the light.
  • A ray might be reflected or transmitted, depending on the material.
  • Several rays are sent out in random directions to see about how much light the point can see.

The last one is what causes a lot of the noise in renders, as the diagram below makes clear.

As you can see, two rays (sent from adjacent pixels) strike a surface and send out secondary rays to calculate indirect illumination (i.e. light not coming directly from lights). With Monte-Carlo methods, the rays are sent out randomly. There is then a very high chance that the rays from one point will strike completely different areas than the rays from the other point, giving the two points very different lighting values. This shows up as grainy noise in the final render. When this happens, the usual solution is to increase the number of rays sent out from each point (causing slower render times), but fortunately there is a better way.
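For reference, the standard Monte-Carlo approach typically draws diffuse bounce directions with cosine-weighted hemisphere sampling. A minimal sketch of that textbook technique (not Cycles' actual code, which uses QMC sample values rather than `random()`):

```python
import math
import random

def random_hemisphere_direction():
    """Cosine-weighted random direction on the unit hemisphere (z-up),
    via Malley's method: sample a unit disk uniformly, then project it
    onto the hemisphere. This is the usual Monte-Carlo diffuse bounce."""
    u1 = random.random()
    u2 = random.random()
    r = math.sqrt(u1)              # disk radius
    phi = 2.0 * math.pi * u2       # disk angle
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # project up onto the hemisphere
    return (x, y, z)
```

Every call returns a different direction, which is exactly the per-pixel decorrelation that produces grain between neighbouring pixels.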

Instead of sending rays out randomly, send them out in a deterministic, regular, evenly-spaced pattern. This pattern would be globally oriented, meaning that whenever two rays strike two points close to each other, the rays sent out from the two points follow similar paths, as the diagram below shows:

With this method, the chances of the indirect rays striking the same areas are greatly improved, so the two points end up with similar lighting. How to choose the pattern, though? There are undoubtedly people much smarter than me who have come up with clever ways to evenly distribute points over a hemisphere, so I’ll just give a brief suggestion here.

The first ray should point straight up (i.e. along the normal), and the next 4 should point in the four directions of the compass (N, E, S, W) while being angled 45 degrees away from the first ray. The rest would then be distributed procedurally around these 5 points. One idea would be to add “levels”, where each level adds several more evenly distributed rays once that level is reached (that might be too complicated for users, though). The pattern would only need to be generated at startup and whenever the user changes the number of indirect rays. Exactly how to orient the pattern to the normal seems fairly easy, but I haven’t figured out the exact solution yet, so I’ll leave that to the more experienced people out there.
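The 5-ray starting pattern described above could be generated like this (purely illustrative, in tangent space with the normal along z; extra "levels" of rays would be appended similarly):

```python
import math

def fixed_hemisphere_pattern():
    """The suggested deterministic 5-ray pattern (a sketch, not anything
    from Cycles): one ray straight along the normal (z-up), four more
    tilted 45 degrees toward the compass points N, E, S and W."""
    s = math.sin(math.radians(45.0))  # horizontal component of the tilt
    c = math.cos(math.radians(45.0))  # vertical component of the tilt
    return [
        (0.0, 0.0, 1.0),  # straight along the normal
        (s, 0.0, c),      # east
        (0.0, s, c),      # north
        (-s, 0.0, c),     # west
        (0.0, -s, c),     # south
    ]
```

Since the pattern is fixed, it would indeed only need to be built once at startup, then rotated into each shading point's local frame.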

This method is not perfect and could cause artifacts in certain cases, but for people willing to work around them the speed-up would be huge. I hope to see an implementation. Any criticism, hints or comments are welcome. Thank you for your time and have a wonderful day. :slight_smile:


What you’re describing is “Coherent Path Tracing”

The authors acknowledge that “This efficiency gain, however, comes at the cost of structured noise […]. This type of noise is more noticeable than the random noise of SPT (third row of Figure 2)”.

Here’s what that structured noise looks like:


Yep, it’s entirely possible, but it has been considered before. Yes, you do get less noise, but the benefit of Monte-Carlo sampling is that it will always converge to the correct result, and the noise, while generally higher than with other, more simplified techniques, is random and even pleasing to work with. It gives you ‘free’ dithering and no strange lines like in the image above.

I have a feeling scrambling distance is related to this - again, lots of hype, but not always as useful as initially apparent.


This is unrelated to convergence: coherent path tracing will converge just like Monte-Carlo ray tracing (and, as the authors point out, coherent path tracing is unbiased too).

Cycles is using deterministic sampling already; by default it uses randomized Quasi-Monte Carlo (a Sobol sequence with Cranley-Patterson rotation).


Sorry, I must have used the wrong terminology. The greater the coherency, the more low-frequency noise/patchiness will be visible. It would indeed converge to the same result, but from my experience with scrambling distance, it tends to look more funky on the way there.

The interesting thing is that while the basic idea of using quasi-random numbers to remove correlation artifacts is easy to understand and has been in use in computer graphics for a long time now, it is still an active area of research.


It is certainly interesting. I wonder if true randomness has ever been used as a source for RNG, and what impact it had on the results.

It would be terrible for rendering. While I’m not aware of anyone using true random numbers, even pseudo-random number generators perform worse than stratified sampling.

I guess that makes sense. In the end you want an even distribution of samples across each dimension, and random numbers wouldn’t do this very well at all. You want the distribution to be uniform for any number of samples while avoiding correlation. No wonder it is still being researched.
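The stratification idea mentioned above can be sketched in a few lines (a 1D toy example, not renderer code): each sample is jittered within its own equal-width stratum, so coverage of [0, 1) is guaranteed to be even no matter what the random jitter does.

```python
import random

def stratified_samples(n):
    """One jittered sample per equal-width stratum of [0, 1).
    Unlike n independent random numbers, this can never leave a
    1/n-wide gap uncovered, which is why it reduces variance."""
    return [(i + random.random()) / n for i in range(n)]
```

With plain `[random.random() for _ in range(n)]` there is no such guarantee, which matches the point above about even distribution across each dimension.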

What happens if we mix this with dithered Sobol?

I read the paper that you linked, and I noticed that they reuse the same random pattern for the rays, while I am suggesting a fixed deterministic pattern (i.e. the same one every render). The idea is that it would be useful in complex scenes with very few flat surfaces (where artifacts would show up); I expected artifacts to appear in scenes with sharp edges and mirror-flat surfaces.

I really appreciate the information you gave me, I didn’t know about CPT before. :slight_smile:

You’d get dithered Sobol.

Let me explain in detail:
When sampling a pixel, Cycles draws numbers from a Sobol sequence. This sequence was precalculated for a single pixel.

If Cycles were to use the exact same sequence, unmodified, for every pixel, you’d see the correlation artifacts that I showed in the image above. An easy way to work around this is to rotate/shift the sequence by a different, pseudorandom offset for every pixel. This is what an unmodified version of Cycles does.

The dithered Sobol patch replaces the pseudorandom offset with a precalculated blue noise offset. In this application, the blue noise looks more pleasing to the eye than pseudorandom numbers, even though the overall error and variance are not reduced.

The scrambling distance patch scales down the offset by which the Sobol sequence is shifted per pixel. As one reduces the offset, more correlation comes back, and at an offset of zero you end up reusing the same Sobol sequence for every pixel again.
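For the curious, the Cranley-Patterson rotation described above is just a wrapped shift in [0, 1). A toy sketch (the `scale` parameter is my own illustration of the scrambling-distance knob, not the patch's actual API):

```python
def cranley_patterson(sample, offset, scale=1.0):
    """Shift a quasi-random sample in [0, 1) by a per-pixel offset,
    wrapping mod 1 so the result stays in [0, 1).
    The hypothetical 'scale' knob mimics the scrambling-distance idea:
    scale=1.0 means full per-pixel decorrelation, scale=0.0 means every
    pixel reuses the exact same Sobol sequence (maximum correlation)."""
    return (sample + scale * offset) % 1.0
```

The key property is that wrapping preserves the sequence's even coverage of [0, 1) while decorrelating neighbouring pixels.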

Cycles (and, I would assume, every other production renderer out there) uses only deterministic pseudo-random numbers. As I mentioned in the post above, the Sobol sequence used by Cycles is deterministic, mostly precomputed and baked into the program code itself.

I specifically mean a symmetric, evenly-spaced, regular pattern with zero randomness in it. Sorry, it’s my fault for being too vague (I should probably edit the post).

That would introduce even stronger correlation artifacts. Look at the sample images in the stochastic sampling paper.


Yes, you are right that it would (though not aliasing artifacts, since this method doesn’t change the pixel sampling pattern).

My idea is that it would be useful in some scenes, such as ones with complex, varied surfaces that would hide the correlation artifacts (since the pattern would be oriented by the normal and the BSDF). Nature scenes are an excellent example. Either way, it just depends on how many scenes show the artifacts prominently.