Comments on Cycles tile rendering, adaptive sampling and render analysis

Hey @brecht, this is the thread I mentioned to you in the OptiX one.

So, to continue the conversation about adaptive sampling: if I recall correctly, in one conversation I had with @StefanW I asked him about the possibility of letting the whole picture complete up to one sampling point, let’s say 100 samples, analysing the picture, and then letting the render start all over again. He told me (and please correct me if I understood it the wrong way) that the way Cycles works is that it starts a render and the tiles start working, but once Cycles has completed the render there is no way to restart the rendering of some tiles (or all tiles). So there is no way to analyse the picture at once to know where we may need more sampling and where the sampling is fully complete.

What denoising does (or seems to do) is wait until enough of the surrounding tiles have been rendered to analyse them and denoise the tile.
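That dependency can be sketched as a simple readiness check. This is an illustration only, not Cycles code; the tile grid and the `rendered` set are assumptions for the example:

```python
# Sketch: a tile can only be denoised once its neighbours have been
# rendered, since the denoiser reads pixels from the surrounding tiles.
# The grid layout and the `rendered` set are hypothetical.

def ready_to_denoise(tile, rendered, grid_w, grid_h):
    """Return True if `tile` and all its in-bounds neighbours are rendered."""
    x, y = tile
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < grid_w and 0 <= ny < grid_h and (nx, ny) not in rendered:
                return False
    return True
```

With a 2×2 grid fully rendered, every tile passes the check; remove one tile and its neighbours become blocked.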

Another thing that seems impossible right now is to stop rendering one tile and dynamically divide it into multiple tiles to leverage free threads. For example, if one tile takes a lot of computing and all the other threads have finished their jobs with no tiles left to render, that last tile could stop rendering at that point, be divided among as many threads as are available, and continue rendering with all of them instead of just one.

For adaptive sampling you may not need the entire image, just the local neighborhood, just like denoising. If the entire image is needed, it’s possible to make things work that way at the cost of extra memory usage, or the overhead of saving/loading things to/from a disk cache.
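A toy sketch of the neighborhood idea (this is not Cycles’ actual adaptive sampling algorithm, just an illustration of estimating convergence locally): compute a per-pixel noise estimate from the variance of a small window, and mark pixels below a threshold as converged so they would stop receiving samples.

```python
# Toy sketch (not Cycles' actual algorithm): estimate per-pixel noise
# from the variance of a small local neighbourhood, and mark pixels
# whose estimate falls below a threshold as converged.

def local_variance(img, x, y, radius=1):
    """Variance of the pixel values in a (2*radius+1)^2 window around (x, y)."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - radius), min(h, y + radius + 1))
            for i in range(max(0, x - radius), min(w, x + radius + 1))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def converged_mask(img, threshold):
    """True for pixels whose local variance is below the noise threshold."""
    return [[local_variance(img, x, y) < threshold
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

A flat region converges immediately; a single outlier pixel keeps its whole neighbourhood marked as unconverged.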

For work scheduling, my plan is to allow devices to render many small tiles at once for better work distribution. For CPU rendering it would also be possible to add code for multiple cores to work on the same tile, distributing the individual pixels between them. I don’t think dividing tiles into smaller tiles is the right strategy.
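The “many small tiles” idea can be sketched with a shared work queue: the image is pre-divided into small tiles that worker threads pull from, so faster threads naturally take on more tiles. The tile size, worker count, and render stub below are assumptions for illustration, not Cycles internals:

```python
# Sketch of the "many small tiles" scheduling idea: workers pull small
# tiles from a shared queue instead of splitting tiles on the fly.
import queue
import threading

def render_tile(tile):
    """Stand-in for the actual per-tile render work."""
    x0, y0, x1, y1 = tile
    return (x1 - x0) * (y1 - y0)  # pretend result: pixel count

def render_image(width, height, tile_size=32, num_workers=4):
    tiles = queue.Queue()
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tiles.put((x, y, min(x + tile_size, width), min(y + tile_size, height)))

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                tile = tiles.get_nowait()
            except queue.Empty:
                return
            r = render_tile(tile)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)
```

The pull-based queue is what gives the load balancing: no thread is ever assigned a fixed share of the image up front.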


Actually, this sounds like a similar idea to one I mentioned to Sergey a while back. I think he called this “work stealing”? That is, when you get down to the last tile, or when the number of tiles left to render drops below the number of idle cores, you start splitting the remaining tile(s) again and again, so that you keep putting the cores to use.

It sounded like it shouldn’t be terribly difficult to do, either, and would be a welcome speed-up to avoid idle CPU/GPU time. At least, this is what it sounded like you were suggesting.
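The splitting heuristic described here can be sketched as follows. This is hypothetical, not how Cycles currently schedules work (and note Brecht’s preference above is for many small tiles rather than splitting):

```python
# Sketch of the splitting idea: when fewer tiles remain than there are
# idle workers, keep splitting the largest remaining tile until every
# worker has something to do, down to a minimum tile size.

def split_tile(tile):
    """Split a tile (x0, y0, x1, y1) in half along its longer axis."""
    x0, y0, x1, y1 = tile
    if x1 - x0 >= y1 - y0:
        xm = (x0 + x1) // 2
        return [(x0, y0, xm, y1), (xm, y0, x1, y1)]
    ym = (y0 + y1) // 2
    return [(x0, y0, x1, ym), (x0, ym, x1, y1)]

def rebalance(tiles, idle_workers, min_size=8):
    """Keep splitting the largest remaining tile while workers sit idle."""
    tiles = list(tiles)
    while len(tiles) < idle_workers:
        tiles.sort(key=lambda t: (t[2] - t[0]) * (t[3] - t[1]), reverse=True)
        big = tiles[0]
        if (big[2] - big[0]) <= min_size and (big[3] - big[1]) <= min_size:
            break  # nothing left worth splitting
        tiles = tiles[1:] + split_tile(big)
    return tiles
```

Splitting a single 64×64 tile for four idle workers yields four tiles that still cover exactly the same area.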

Yes, but in any case you need to be able to restart the rendering of such a tile. You are right that the whole image may not be needed, but you need some tiles to have finished their work so you can analyse those tiles and then restart the tile rendering, like doing several passes per tile.

For that, you know better what the best way to tackle it could be; I’m just sharing an idea I’ve seen used, but you surely know best.
What @troubled said is also another way of looking at it: making sure there is always a tile left to be rendered, with a limit on the tile size at some point.

This is an absolute must in my opinion: adaptive sampling based on neighbouring pixels from the direct/indirect passes.

Perhaps the bounces could also be adaptively increased per pixel if the noise level is analysed on a per-pass basis rather than on the beauty/combined pass. I.e. if the noise level of the diffuse direct pass is not reducing by an acceptable amount relative to the number of samples (for example in an interior scene where it’s hard to hit a light source), then the bounces for that pixel could be increased. If the result still doesn’t improve after an additional number of samples at the higher bounce value, then this pixel could be discarded as impossible and, at the end of the render, be assigned a value which is the average of the surrounding pixels.
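A toy sketch of this proposed heuristic (purely hypothetical, not an existing Cycles feature; all names are made up for illustration): track how much a pass’s noise estimate falls per batch of samples, raise the per-pixel bounce count when convergence stalls, and flag the pixel once the bounce budget is exhausted.

```python
# Hypothetical sketch of the proposed per-pass heuristic: if the noise
# estimate stops improving, try deeper paths; once the bounce budget is
# spent, mark the pixel to be filled from its neighbours after the render.

def update_pixel_state(state, noise_now, min_improvement=0.05, max_bounces=12):
    """state: dict with 'noise', 'bounces', 'stalled'. Returns the updated state."""
    improvement = state["noise"] - noise_now
    if improvement < min_improvement * state["noise"]:
        if state["bounces"] < max_bounces:
            state["bounces"] += 1      # convergence stalled: try deeper paths
        else:
            state["stalled"] = True    # give up; average neighbours at the end
    state["noise"] = noise_now
    return state
```

A pixel whose noise barely drops gets an extra bounce; one that converges normally keeps its current bounce count.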

Also, discarding the last tile, dividing it among the available threads, and restarting it would be amazing.


I’ve always wondered as well whether the area a ray samples could be adaptive based on the distance the ray has travelled. So for example if a glossy ray travels 100 metres, it could return an average value over a 100-metre radius around the hit point, orthographically along the ray direction, whereas if it travelled 1 metre before hitting anything it could average out 1 metre’s worth of data, and then limit the direction of secondary rays to exclude areas already sampled.

If we talk about adaptive sampling, it would be nice if it took the colour management LUT into consideration… because the noise map will look totally different depending on whether we render in linear, sRGB, or log.
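A small sketch of why the display transform matters: the same noisy samples give a different variance (i.e. a different noise map) measured in linear light than after an sRGB-style encode, especially in the shadows where the transfer curve stretches values apart. The sample values are made up for illustration.

```python
# The same noisy samples measured in linear light vs after an sRGB
# encode: the transfer curve stretches dark values apart, so the
# measured noise in the shadows is much larger after encoding.

def srgb_encode(v):
    """Standard sRGB transfer function for a linear value in [0, 1]."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def variance(vals):
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A dark, noisy pixel's samples, measured both ways:
linear_samples = [0.01, 0.02, 0.015, 0.03, 0.01]
var_linear = variance(linear_samples)
var_srgb = variance([srgb_encode(v) for v in linear_samples])
```

Here `var_srgb` comes out much larger than `var_linear`, so a noise threshold tuned in one space would behave very differently in the other.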