Rendered image

Hello, I rendered a 1024×1024 image in Blender and I’d like to get a 3D representation of the data. Is there a way to do that with Blender Python? I’m really new to the programming side, so some code examples would be great. Thanks.

I don’t know of a Python way myself, but I’m not experienced with Python coding at all, so it may very well be possible.

But maybe you can do something with the Geometry Nodes Raycast node? I don’t think you can really interact with geometry nodes from Python (yet), but maybe you can at least get the data that way.

If you have already rendered the image in Blender, you can just use the data from the Position pass - it is exactly that: the first intersection, encoded in RGB values that can be read as positions in space.
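Here is a rough sketch of turning that pass into a point cloud. It assumes you have enabled the Position pass on the view layer (Passes > Data in Cycles; in Python that should be view_layer.use_pass_position in recent versions, if I remember right) and saved it out as an EXR, for example with a File Output node in the compositor. The file path and object names are just placeholders:

```python
import bpy

# Path is just a placeholder - point it at the EXR you saved from the Position pass.
img = bpy.data.images.load("/tmp/position_pass.exr")
img.colorspace_settings.name = 'Non-Color'  # keep the raw float values untouched

pixels = list(img.pixels)  # flat RGBA list, 4 floats per pixel

verts = []
for i in range(0, len(pixels), 4):
    x, y, z, a = pixels[i:i + 4]
    # If your EXR carries the render's alpha, this skips background pixels;
    # otherwise drop the check (or filter on the mist/depth pass instead).
    if a > 0.0:
        verts.append((x, y, z))

# A vertex-only mesh is enough to see the point cloud in the viewport.
mesh = bpy.data.meshes.new("PositionPointCloud")
mesh.from_pydata(verts, [], [])
obj = bpy.data.objects.new("PositionPointCloud", mesh)
bpy.context.collection.objects.link(obj)
```

For a 1024×1024 render that is up to about a million points, which Blender handles fine as a vertex-only mesh.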
I can only guess what you want to do, but I think a better place to ask this would be the community forums.

Let me look into it. Thanks.

How is this used? I know about normal maps and depth maps, but I haven’t seen this used anywhere.

Advanced compositing packages like Nuke, for example - you can extract a colored point cloud from each image… I myself used this a lot in my pipeline involving Unity, Rhino and Blender.
One application might be a spherical image mask - you define a point in 3D space and ‘select’ all pixels within a given distance. The same thing could be done with the Z-buffer and the camera matrix, but it’s much more work, math-setup-wise…
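If it helps, here is roughly what that spherical mask looks like in code - just a sketch using NumPy, assuming the Position pass has already been read into an array of shape (height, width, 3); the center point and radius below are made-up values:

```python
import numpy as np

def spherical_mask(positions, center, radius):
    """Mask of pixels whose world position lies within `radius` of `center`.

    positions: (H, W, 3) array of world-space positions (from the Position pass).
    center:    (x, y, z) point in world space.
    radius:    distance in scene units.
    Returns an (H, W) float array: 1.0 inside the sphere, 0.0 outside.
    """
    dist = np.linalg.norm(positions - np.asarray(center), axis=-1)
    return (dist <= radius).astype(np.float32)

# Made-up usage; positions could come from reading the EXR with imageio or OpenEXR:
# mask = spherical_mask(positions, center=(0.0, 0.0, 1.0), radius=0.5)
```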