Idea for noise-free Cycles at low samples

Hi guys,

I have an idea for getting Cycles noise-free without a denoiser at very low sample counts (64/128?).

OK, so we render the scene (it doesn't matter whether it's tiled or progressive) and get to the end at, let's say, 64 samples. Now the dark areas will be noisy. While the scene is still in memory, we take the Direct Light Diffuse pass only and use it as additional emission for an additive second indirect pass that writes its result only to the Indirect Diffuse AOV, nothing else. Let's say another 64 samples.

EDIT: Obviously this is a two-pass rendering; the second 64 samples take the direct diffuse into account (together with the texture albedo, of course) as emission of that surface.

Because direct light is always super clean, super fast, the result should be very clean, at least for the diffuse pass. The same goes for the second bounce, especially because the surface is now illuminated by the large area of the first-hit surface. This could be extended to the specular or refraction AOVs too, but diffuse GI would cover most problems.
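The proposed split can be sketched per pixel with a toy example. This is just arithmetic with made-up numbers (two facing diffuse patches, one lit directly), not Cycles code, but it shows how pass 2 treats the pass-1 direct diffuse result as emission and writes only into the Indirect Diffuse AOV:

```python
# Toy sketch of the proposed two-pass split (made-up numbers, not Cycles code).
# Pass 1: render the Direct Diffuse pass as usual (clean at low samples).
# Pass 2: treat each surface's direct diffuse result as emission and gather
# one indirect bounce into the Indirect Diffuse AOV only.

# Two facing diffuse patches: A is lit directly, B only indirectly.
light_intensity = 4.0
albedo_a, albedo_b = 0.8, 0.6
form_factor_ab = 0.25          # fraction of A's outgoing light reaching B

# Pass 1: direct diffuse per patch (B gets no direct light).
direct_a = albedo_a * light_intensity
direct_b = 0.0

# Pass 2: A "emits" its direct diffuse result; B gathers one bounce from it.
indirect_b = albedo_b * form_factor_ab * direct_a
indirect_a = 0.0               # B is dark here, so nothing bounces back to A

# Final per-pixel composite: Direct Diffuse + Indirect Diffuse AOVs.
final_a = direct_a + indirect_a
final_b = direct_b + indirect_b
print(final_a, final_b)        # 3.2 0.48
```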

Interior scenes would render extremely fast and noise-free with this, because in the second pass the walls are emissive by the amount of energy they bounce, of course.
There should also be no artifacts, because the rendering is per pixel.



Don’t we already have Adaptive Sampling?

you’d need to map the first pass result onto geometry
edit: look at this: Lighting GI Baking Setup - YouTube and this: probes GI - YouTube

This has nothing to do with adaptive sampling. It's a kind of light cache.


Well, no, not really: you already have the depth of the pixel from the Z-map or from the first hit of the second render; you only need to add the emission strength of that stored pixel to the first diffuse ray bounce.
GI baking is worse, because it stores indirect lighting, and noisily at that. This stores only the first ray and adds the energy to the first hit of the second render, per pixel.
I hope I explained it OK; this should be super fast.

What you’re describing is called “Irradiance Caching”.

It’s not that it’s a bad algorithm per se. Redshift and other renderers do this. However, it’s not always faster, especially in viewport rendering. And it is explicitly biased, whereas Blender has been adamant about keeping Cycles “unbiased”.


Irradiance caching renders at low resolution and, similarly to final gathering, calculates a diffuse coefficient. Then it renders the final image and adds those lower-resolution voxels to the equation. While this seems similar, it is biased because of the low resolution and the limitations of the cache memory. What I mean is similar but still different, I think: you render the first samples at full resolution, like in a final render, with the full composite and AOVs.
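For readers who haven't met irradiance caching before, here is a minimal sketch of the mechanism being discussed. It is an illustration only (the `_expensive_irradiance` function is a made-up stand-in for a real hemisphere gather, not Redshift or Cycles code): expensive evaluations are stored at sparse points, and nearby queries reuse a cached value, which is where both the speedup and the bias come from.

```python
import math

class IrradianceCache:
    """Toy irradiance cache: reuse nearby records instead of recomputing."""

    def __init__(self, max_dist):
        self.max_dist = max_dist
        self.records = []          # list of (position, irradiance)
        self.computes = 0          # how many full evaluations we paid for

    def _expensive_irradiance(self, pos):
        # Stand-in for a real hemisphere gather at `pos` (hypothetical).
        self.computes += 1
        x, y, z = pos
        return 1.0 / (1.0 + x * x + y * y + z * z)

    def query(self, pos):
        for rec_pos, rec_e in self.records:
            if math.dist(pos, rec_pos) < self.max_dist:
                return rec_e       # reuse a nearby record: fast but biased
        e = self._expensive_irradiance(pos)
        self.records.append((pos, e))
        return e

cache = IrradianceCache(max_dist=0.5)
points = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (2.0, 0.0, 0.0)]
values = [cache.query(p) for p in points]
print(cache.computes)              # 2: the middle query reused a record
```

The dark corner smears mentioned later in the thread come exactly from that reuse step: a cached value is blended in where the true irradiance changes faster than `max_dist` allows.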

When the second render starts and the first ray hits, you “add” the previous pixels at the same time and emit that energy into the scene; but instead of storing all those voxels in the scene, you take the depth of the first hit. If the calculations are done correctly and everything is done per pixel, then the result should be unbiased, because both renders are truly traced per pixel. This could be extended to other AOVs too.
I am aware that due to unidirectional path searching, some small lights, holes, etc. are not found, but the point is about the big surfaces and the path from the first hit to the second hit. That is the worst part for noise and makes the biggest time delta.

One of the main points of Cycles (and path tracing in general) is to dramatically simplify the rendering workflow by eliminating the forest of resolution controls. You probably won’t gain that much time, because you will be busy with test renders while you figure out the optimal combination of values.

Besides that, there’s still a lot of room for optimization: multi-light sampling, path guiding, and MNEE (with the first two definitely planned) will dramatically speed up renders in many cases.


No, I mean that it would be implemented in the render kernel. No testing or anything; fully automatic.

I think you are incorrectly assuming that the direct lighting at the shading point directly visible in a certain pixel is somehow all you need to compute indirect lighting at that same shading point. This is not true: you need to integrate the direct lighting from many other shading points. And then you get to algorithms like irradiance caching and screen-space GI, each with their pros and cons.
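The integration point above can be made concrete with a small Monte Carlo sketch. Indirect lighting at a shading point is an integral of incoming (here: direct) radiance over the whole hemisphere; even in the trivial case of constant incoming radiance L = 1, the exact answer is π, which no single stored value gives you for free. This is a toy estimator, not renderer code:

```python
import math
import random

random.seed(7)

def sample_uniform_hemisphere():
    # Uniform direction on the upper hemisphere (z up).
    z = random.random()
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def incoming_direct_radiance(direction):
    # Stand-in for "direct lighting arriving from another shading point".
    # Constant here; in a real scene it varies per direction.
    return 1.0

n = 200_000
total = 0.0
for _ in range(n):
    d = sample_uniform_hemisphere()
    cos_theta = d[2]
    total += incoming_direct_radiance(d) * cos_theta

# Uniform hemisphere pdf is 1 / (2*pi), so the estimator scales by 2*pi.
irradiance = total * 2.0 * math.pi / n
print(irradiance)                  # close to pi (~3.1416)
```

Replacing `incoming_direct_radiance` with one number taken from a single pixel is exactly the shortcut the post warns against.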


But how would you add the emission to a “pixel”?

OK, now that I’ve thought about it more, I got the idea half wrong, LOL. The AOVs are of course not enough. Basically, what I mean is: after having enough samples, probe the scene for the indirect diffuse geometry that receives the first hit of any light, or is diffuse. But this goes towards adaptive sampling.

Uffff… the problem is that the rays shot into the scene can’t recognize big surfaces that reflect light, only the ones that emit. So in any case there has to be a two-pass rendering.

How difficult would it be to tell Cycles: look, instead of baking to a texture, bake diffuse low-res to memory, only the selected direct lights or the env map, then throw it away and redo it each frame? Because baking is currently static, and baking to textures is complicated.

Woooow, wait. What if the idea I had for Eevee would solve the first-hit emission? The problem is that you don’t see what is off screen, so why not render a low-res 360° fisheye, and then you know what you hit? That would also be usable with deep data (so that you render behind objects / deep values) or Z-buffers. Guys, look into it:

So, for very accurate results, this could also mean rendering multiple (maybe scene-based?) low-res (512 px?) 360° diffuse fisheyes at different (definable?) distances/radiuses from the camera, like an irradiance map, storing them in deep data, and then rendering the scene with that deep emission. Now the per-pixel thing would work, because there is nothing more to search for. Of course, this implies that the cascade of fisheyes is automatically emissive based on the depth from the camera.

PS: until now there were REYES, rasterizers, path tracing, etc. All of them calculate from the camera, except bidir, which adds the lights. Now… what if there were another way to render things? Not from the camera at all. I mean, you would still have the lens, but the mechanics behind it… what if we didn’t need to shoot rays into the scene, but they came to the lens somehow… automagically :slight_smile: Hmm… I don’t know, but what if there is another way… just putting ideas out. :smiley: What if the renderers of the year 2040 are not path tracers at all? Hm…

One easy trick I sometimes use when making fly-through videos (indoors):
Place one camera in the middle of the scene and render an equirectangular view of the room. Then use that render (with that camera’s coordinates) as an emitter for the second diffuse bounce. I usually get pretty convincing results with this sort of DIY irradiance cache.
I have to say, though, that lately, with Cycles X, render times are less of a problem, even for videos: I can get 5-10 minutes per frame, which is fine for the 60-second videos I make.
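The core of that DIY trick is that a world-space direction can be turned into a lookup in the equirectangular image. Here is a minimal sketch of that mapping; the axis convention (y up, -z forward) is an assumption of this example, and real setups vary:

```python
import math

def direction_to_equirect_uv(d):
    """Map a 3D direction to (u, v) in [0, 1] on an equirectangular image.

    Assumed convention: y up, -z forward; u wraps around the horizon,
    v runs from the top of the image (v=0) to the bottom (v=1).
    """
    x, y, z = d
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)   # longitude -> [0, 1]
    v = 0.5 - math.asin(y) / math.pi                # latitude  -> [0, 1]
    return u, v

# A bounce direction looking straight "forward" hits the image center;
# straight up hits the top row.
print(direction_to_equirect_uv((0.0, 0.0, -1.0)))  # (0.5, 0.5)
print(direction_to_equirect_uv((0.0, 1.0, 0.0)))   # (0.5, 0.0)
```

Sampling the rendered room image at that (u, v) for each second-bounce direction is what makes it behave like a one-record irradiance cache centered on the camera.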

> I have to say though that lately, with cyclesX, rendertimes are less a problem, even for videos: I can get 5-10 minutes per frame that are good for 60seconds videos I make

Yes, I’m not complaining. :smiley: This is the craziest speedup in render history I’ve ever seen. Just amazing, guys!
And guys, have mercy on me; I’m a little kung-fu panda :panda_face: trying to put some ideas forward. Maybe even a blind chicken :chicken: sometimes finds a corn :corn: :smiley: You are dev giants. :smiley: LOL


MODO has an irradiance caching option, and I used to fight with it a lot, because it produces dark smears in the corners of interior renders unless you adjust some hard-to-comprehend values that depend on scene size, other sample values, etc.

I always ended up turning it off, because it just always sucked.

Light tracing and photon mapping, techniques from the 1990s and earlier.

Yeah, I don’t miss these approximations that require the artist to memorize 17 thesis papers for a render to work okay-ish, and then totally break down when you start rendering sequences.

I remember the options in V-Ray 2.something were absolute hell; I’d end up using the brute-force method, which was basically path tracing.

Now I don’t want to bring down enthusiastic folks and their ideas…

NEE (next event estimation), path guiding, and maybe also many-lights sampling.
These three are techniques from papers that might end up in Cycles sooner or later.
Let’s sit on the riverside and wait…
