Adjust Z Pass Range Automatically to Get Depth Image

Hi, developers. I am trying to generate a depth (Z) image for research purposes, but when I adjust values in the Compositor nodes, I cannot find the minimum and maximum depth in the current scene, which I need in order to map Z onto the full 0-1 range.

In the code below, I marked the two places where I don't know how to get the values from the scene context so that, for any model, the lightest area of the rendered image is closest to the camera and the darkest area is the farthest.

Here is the relevant section of my code, adapted from a Stack Overflow post.

# Set up rendering of depth map:
bpy.context.scene.use_nodes = True
tree = bpy.context.scene.node_tree
links = tree.links

# clear default nodes
for n in tree.nodes:
    tree.nodes.remove(n)

# create input render layer node
rl = tree.nodes.new('CompositorNodeRLayers')

map_node = tree.nodes.new(type="CompositorNodeMapRange")
# The From Min/From Max range is chosen somewhat arbitrarily; experiment
# until you are satisfied with the resulting depth map.
map_node.inputs[1].default_value = 0.6      ##### Min value here (From Min)
map_node.inputs[2].default_value = 1.65     ##### Max value here (From Max)
map_node.inputs[3].default_value = 0.0      # To Min
map_node.inputs[4].default_value = 1.0      # To Max

links.new(rl.outputs[2], map_node.inputs[0])  # outputs[2] is the Depth pass

invert = tree.nodes.new(type="CompositorNodeInvert")
links.new(map_node.outputs[0], invert.inputs[1])  # inputs[1] is Color (inputs[0] is Fac)

# The viewer can come in handy for inspecting the results in the GUI
depthViewer = tree.nodes.new(type="CompositorNodeViewer")
links.new(invert.outputs[0], depthViewer.inputs[0])
# Use alpha from input.
links.new(rl.outputs[1], depthViewer.inputs[1])

# create a file output node and set the path
# (out_z_path must be defined earlier in your script)
fileOutput = tree.nodes.new(type="CompositorNodeOutputFile")
fileOutput.base_path = out_z_path
links.new(invert.outputs[0], fileOutput.inputs[0])

bpy.ops.render.render()
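One way to fill in those two marked values automatically (a sketch, not code from this thread) is to compute the minimum and maximum camera-space depth over a set of world-space points, e.g. the bounding-box corners of every mesh, and feed the results into the Map Range node's From Min/From Max sockets. In Blender the inverse camera matrix would come from something like `bpy.context.scene.camera.matrix_world.inverted()`; the math below is shown without bpy so it stands alone, and all names are my own:

```python
def camera_space_depth(cam_matrix_world_inv, point):
    """Depth of a world-space point: -z after transforming into camera space.
    cam_matrix_world_inv is a 4x4 row-major matrix (nested lists), the inverse
    of the camera's world matrix. Blender cameras look down their local -Z axis,
    so depth along the view direction is the negated camera-space z."""
    x, y, z = point
    # Third row of the 4x4 transform applied to the homogeneous point (x, y, z, 1).
    cam_z = (cam_matrix_world_inv[2][0] * x
             + cam_matrix_world_inv[2][1] * y
             + cam_matrix_world_inv[2][2] * z
             + cam_matrix_world_inv[2][3])
    return -cam_z

def depth_bounds(cam_matrix_world_inv, points):
    """Min/max depth over a collection of world-space points
    (e.g. bounding-box corners of every mesh in the scene)."""
    depths = [camera_space_depth(cam_matrix_world_inv, p) for p in points]
    return min(depths), max(depths)
```

The two returned values would then go into `map_node.inputs[1]` and `map_node.inputs[2]` before rendering.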

Depth values in the render pass are relative to the camera's near/far clipping planes, so 0 = the near clip plane and 1 = the far clip plane (after normalization). I don't know for sure what the exact relationship is, though, so this may be wrong or oversimplified. You could dig around in the source code for an exact answer, or run some tests on a simple scene with several camera settings.
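If that relationship holds, turning a raw depth value into the 0-1 range is just a linear remap between the clip planes. A minimal sketch, assuming the linear mapping described above (function name and clip values are made up):

```python
def normalize_depth(z, clip_near, clip_far):
    """Linearly remap a raw depth value to [0, 1] between the clip planes.
    Values outside the clip range are clamped to the endpoints."""
    t = (z - clip_near) / (clip_far - clip_near)
    return min(max(t, 0.0), 1.0)
```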

Anyway, from the Blender Manual:

Z

Distance to any visible surfaces.

Note

The Z pass only uses one sample. When depth values need to be blended in case of motion blur or Depth of Field, use the mist pass.

Mist

Distance to visible surfaces, mapped to the 0.0-1.0 range. When enabled, settings are in World tab. This pass can be used in compositing to fade out objects that are farther away.

So I guess that the depth pass is not normalized and is mapped 1:1 to the clipping planes; if you want properly normalized values, use the Mist pass at the cost of a greater number of samples.

Thank you, and sorry for the late reply! I did investigate the Mist pass, but it seems to blur edges and the background. I found that adding a simple Normalize node before the Map Range node may solve the issue.


I was looking into how to use the Z-depth pass and found a couple of interesting tutorials. This one by Max Puliero is awesome. I was thinking of making a simple add-on.

He basically uses two empties to control the min and max values with a Map Range node.
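Presumably that setup just derives the Map Range node's From Min/From Max from the camera-to-empty distances, so you can drag the empties to frame the depth range interactively. A bpy-free sketch of that derivation (function and setup names are my own, and in Blender the locations would come from `obj.location`):

```python
import math

def empties_to_range(cam_loc, near_empty_loc, far_empty_loc):
    """Derive the Map Range node's From Min/From Max from two marker
    empties: the distances from the camera to each empty."""
    return (math.dist(cam_loc, near_empty_loc),
            math.dist(cam_loc, far_empty_loc))
```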

Hi:

I just wonder whether you succeeded in retrieving the actual depth from a rendered image. I loaded the render result into MATLAB and converted it to a grayscale matrix, yet the grayscale values are not proportional to the exact distance/depth.