Each rendered pixel to store the colour contribution of each contributing object, each contributing material, and each contributing light

If each rendered pixel could store the colour contribution of each contributing object, material, and light, then when using Cryptomatte to change the appearance of an object or a material, all of the other pixels that were influenced by that object or material could update accordingly.

Handy if you want reflections to update correctly, or if you want to change the colour of an object that's behind glass, for example.
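As a rough sketch of what that could look like (purely hypothetical, with illustrative names; Cycles doesn't expose AOVs like this today), imagine the renderer wrote one contribution image per object, with the beauty pass being their sum. Tinting one object's image would then also update its reflections elsewhere, although this would only be exact for changes that scale light transport linearly (multi-bounce colour bleeding wouldn't be):

```python
import numpy as np

# Hypothetical per-object contribution AOVs: each is an HxWx3 float image,
# and the beauty pass is their sum. Names and data are illustrative only.
contributions = {
    "red_car": np.random.rand(4, 4, 3),
    "glass_window": np.random.rand(4, 4, 3),
    "floor": np.random.rand(4, 4, 3),
}

def recombine(contribs, tints=None):
    """Rebuild the beauty pass, optionally tinting individual objects.

    Tinting 'red_car' also updates every pixel where the car appears
    indirectly (reflections, refraction through the glass), because its
    contribution image includes those paths.
    """
    tints = tints or {}
    beauty = np.zeros_like(next(iter(contribs.values())))
    for name, image in contribs.items():
        beauty += image * np.asarray(tints.get(name, (1.0, 1.0, 1.0)))
    return beauty

original = recombine(contributions)
recoloured = recombine(contributions, tints={"red_car": (0.2, 0.4, 1.0)})
```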

Additionally, because light information is being stored, we could change the colour and strength of lights in the compositor without having to re-render. We'd just need a list of scene lights down the side of the compositor with strength/colour parameters to tweak. We could even replace the HDRI used to light the scene: as well as storing the HDRI's colour contribution per pixel, we could also store the location on the HDRI that was sampled. This would let us find the relationship between the original pixel value on the HDRI and its final contribution to the rendered pixel, and then use that relationship to calculate the new HDRI's colour contribution. It would even mean we could rotate the HDRI in comp and have all of the pixels update appropriately. Sort of like real-time re-rendering of rendered images, minus the ability to move objects or lights.
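Here's a toy version of the HDRI part (a hedged sketch under big simplifications: it assumes one stored HDRI sample per pixel and a purely linear relationship, whereas a real path tracer blends many environment samples per pixel). All names and resolutions are made up for illustration:

```python
import numpy as np

H, W = 270, 480      # render resolution (illustrative)
EH, EW = 512, 1024   # HDRI lat-long resolution (illustrative)

old_hdri = np.random.rand(EH, EW, 3).astype(np.float32)
new_hdri = np.random.rand(EH, EW, 3).astype(np.float32)

# Hypothetical AOVs: per-pixel HDRI texel that was sampled, and the colour
# it ended up contributing to the pixel after the bounces in between.
sample_uv = np.stack(
    [np.random.randint(0, EH, (H, W)), np.random.randint(0, EW, (H, W))],
    axis=-1,
)
hdri_contribution = np.random.rand(H, W, 3).astype(np.float32)
beauty = hdri_contribution + np.random.rand(H, W, 3).astype(np.float32)

def relight(beauty, contribution, uv, old_env, new_env, rotation_px=0):
    """Swap (and optionally rotate) the environment without re-rendering.

    Assumes the stored contribution scales linearly with the sampled HDRI
    texel, so: new_contribution = contribution * new_texel / old_texel.
    """
    ys, xs = uv[..., 0], uv[..., 1]
    xs_rot = (xs + rotation_px) % new_env.shape[1]  # rotation = longitude shift
    old_texel = old_env[ys, xs]
    new_texel = new_env[ys, xs_rot]
    ratio = new_texel / np.maximum(old_texel, 1e-6)
    return beauty - contribution + contribution * ratio

relit = relight(beauty, hdri_contribution, sample_uv, old_hdri, new_hdri,
                rotation_px=256)  # a 90-degree rotation at this HDRI width
```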

The ability to change the strength and colour of lights in comp is already available in Corona and V-Ray, and possibly Arnold and RenderMan too. As far as I'm aware, Blender could be the first to do it per object/material. It seems to be setting the trend in a lot of other areas at the moment, such as sculpt mode and some of the new edit mode tools.

Hi!

Storing this much information per pixel requires a lot of memory, and storing too much data would also increase render time: memory access is usually the bottleneck during rendering. Changing lights in comp is doable, and there is already a patch available for Cycles (https://developer.blender.org/T78008).

During path tracing, any material in the scene can potentially affect any output pixel, which leads to massive memory needs and data structures. E.g. roughly (number of lights) × (number of objects) × (number of materials) additional images are needed to store the information, each with its own channels.
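To put rough numbers on that combinatorial blow-up (a back-of-the-envelope estimate with made-up scene sizes, assuming half-float RGB per contribution image):

```python
# Back-of-the-envelope memory estimate for per-pixel contribution storage.
# Scene sizes are illustrative, not taken from any real production.
width, height = 3840, 2160          # 4K frame
num_lights = 8
num_objects = 200
num_materials = 50
channels = 3                        # RGB
bytes_per_channel = 2               # half float

images = num_lights * num_objects * num_materials  # one image per combination
bytes_total = images * width * height * channels * bytes_per_channel
print(f"{images} extra images, ~{bytes_total / 2**40:.1f} TiB per frame")
# -> 80000 extra images, ~3.6 TiB per frame
```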

There have been research render engines along these lines for years, but I don't expect them to become the market standard due to the limited value they offer in the end. E.g. compositing becomes very flexible, but also very slow, which lowers its effectiveness.

Storing more render data for comp is something to look at, but we should be careful to select the right use cases and design the data structures based on those needs.

Woah, cool, I didn't know about that patch! Thanks.

I guess it's just a case of waiting for the hardware to catch up then, a bit like with ray tracing when the guy got laughed out of the room for suggesting something so resource-expensive :smiley:

One day :slight_smile: Fingers crossed Blender can be the first!