Modifying Blender to produce UV-unwrapped renders

Hi!

I’d like to use Blender to produce multiview renders of a single UV-unwrapped mesh, and I need to match pixels across the different renders. To maintain pixel correspondences between shots, I want to produce, in the same UV coordinates, textures of the “final color as seen by the camera under the given lighting”. My current approach is to enable the UV pass in Cycles and use the coordinates it outputs to fill an empty buffer (outside Blender) with colors from the Combined pass. While this works, the textures I obtain this way are sparse and noisy, so they still need to be denoised and interpolated somehow.
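
For concreteness, here is a minimal sketch of that scatter step, assuming the UV and Combined passes have been loaded as float NumPy arrays (e.g. from OpenEXR); the names `uv_pass`, `combined` and `tex_size` are mine, not Blender’s, and depending on how the images are stored the v axis may need flipping:

```python
import numpy as np

def scatter_to_uv(uv_pass, combined, tex_size=1024):
    """Scatter per-render-pixel Combined colors into an empty UV-space texture."""
    texture = np.zeros((tex_size, tex_size, 4), dtype=np.float32)
    weight = np.zeros((tex_size, tex_size, 1), dtype=np.float32)

    # The UV pass stores (u, v) per render pixel; background pixels are zero.
    uv = uv_pass[..., :2].reshape(-1, 2)
    colors = combined[..., :4].reshape(-1, 4)
    mask = (uv != 0).any(axis=1)

    # Nearest-texel splatting: this is the step that leaves the texture
    # sparse and noisy, hence the need for denoising/interpolation afterwards.
    px = np.clip((uv[mask, 0] * tex_size).astype(int), 0, tex_size - 1)
    py = np.clip((uv[mask, 1] * tex_size).astype(int), 0, tex_size - 1)
    np.add.at(texture, (py, px), colors[mask])  # accumulate duplicate hits
    np.add.at(weight, (py, px), 1.0)
    return texture / np.maximum(weight, 1e-8)
```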

Texture baking doesn’t immediately solve my problem since, if I understand correctly, each pixel of a baked texture is what a camera would have registered if it looked straight at that pixel along the surface normal, without distortion. I need to account for the position of the actual camera, so that specularities are captured and pixels invisible to the camera stay dark.

Basically, what I seem to need (in my rather pedestrian view) is, given a sampled point on the output texture, to find the corresponding point on the corresponding visible triangle and cast a ray from the camera in that direction, accounting for its lens, etc. The difference from a regular render would be that intensities are accumulated into the texture, and every texel that is visible must be covered. A kind of custom projection.
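
To make that concrete, here is a rough bpy sketch of the two building blocks, assuming a triangulated, UV-unwrapped mesh: mapping a texel’s UV coordinate back to a point on the surface, and testing whether the camera actually sees that point. `find_surface_point` and `visible_from_camera` are hypothetical helpers, not existing Blender API, and a real implementation would need to batch this and accumulate the color arriving along each ray rather than just test visibility:

```python
import bpy
from mathutils import Vector
from mathutils.geometry import barycentric_transform, intersect_point_tri_2d

def find_surface_point(mesh, uv):
    """Return the object-space point whose UV coordinate is `uv`, if any."""
    uv_layer = mesh.uv_layers.active.data
    for tri in mesh.loop_triangles:
        uvs = [uv_layer[l].uv for l in tri.loops]
        if intersect_point_tri_2d(uv, *uvs):
            verts = [mesh.vertices[v].co for v in tri.vertices]
            # Barycentric mapping from the UV triangle to the 3D triangle.
            return barycentric_transform(uv.to_3d(),
                                         uvs[0].to_3d(), uvs[1].to_3d(),
                                         uvs[2].to_3d(), *verts)
    return None  # the texel lies outside every UV island

def visible_from_camera(scene, depsgraph, point_world, eps=1e-4):
    """Cast a ray from the camera toward the point; occluded texels stay dark."""
    origin = scene.camera.matrix_world.translation
    direction = (point_world - origin).normalized()
    hit, loc, _, _, _, _ = scene.ray_cast(depsgraph, origin, direction)
    return hit and (loc - point_world).length < eps

# Usage, assuming the active object is the unwrapped mesh:
scene = bpy.context.scene
depsgraph = bpy.context.evaluated_depsgraph_get()
obj = bpy.context.active_object
obj.data.calc_loop_triangles()
p = find_surface_point(obj.data, Vector((0.5, 0.5)))
if p is not None:
    print(visible_from_camera(scene, depsgraph, obj.matrix_world @ p))
```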

I don’t claim that such an approach is the best way to achieve the results I need, and there may be a way to reproduce this behaviour, perhaps in a rather hackish way, with the tools Blender already provides. Any “alternative solution” suggestions are welcome.

So I’m wondering: what would it take to implement the described behaviour as some sort of custom camera or projection type within Cycles? I’m unfamiliar with Blender’s codebase.

Thanks!