My requirement is as follows:
I am rendering an image with Cycles (not inside Blender, but programmatically through the Cycles standalone API).
After the image is rendered, I need to be able to draw a boundary around each object.
To achieve this:
How can I get the boundaries (2D polygons) of every rendered object in 2D space, given a camera, width, and height? An object might map to more than one polygon, and some objects may not produce any polygons at all because they are not visible in the rendered image.
You’d transform the object bounding box from world space to raster space. This is done by transforming each of the eight corner points of the bounding box with the scene camera’s worldtoraster matrix, and then computing the 2D bounding box of those transformed points.
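Roughly, the corner-projection step looks like this. This is a minimal standalone sketch; the Vec3/Mat4 types and function names here are stand-ins for illustration only, inside Cycles you would use float3, BoundBox and the camera's worldtoraster transform instead:

```cpp
#include <algorithm>
#include <cfloat>

// Stand-in types for illustration (not Cycles types).
struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };  // row-major world-to-raster matrix

// Apply a projective (perspective) transform: multiply by the 4x4 matrix
// and divide by the resulting w component.
static Vec3 transform_point(const Mat4 &t, const Vec3 &p)
{
  float x = t.m[0][0] * p.x + t.m[0][1] * p.y + t.m[0][2] * p.z + t.m[0][3];
  float y = t.m[1][0] * p.x + t.m[1][1] * p.y + t.m[1][2] * p.z + t.m[1][3];
  float z = t.m[2][0] * p.x + t.m[2][1] * p.y + t.m[2][2] * p.z + t.m[2][3];
  float w = t.m[3][0] * p.x + t.m[3][1] * p.y + t.m[3][2] * p.z + t.m[3][3];
  return (w != 0.0f) ? Vec3{x / w, y / w, z / w} : Vec3{0.0f, 0.0f, 0.0f};
}

// Project the 8 corners of a world-space bounding box and take the 2D
// min/max of the results to get a raster-space (pixel) bounding box.
static void raster_bounds(const Mat4 &worldtoraster,
                          const Vec3 &bb_min, const Vec3 &bb_max,
                          float &min_x, float &min_y,
                          float &max_x, float &max_y)
{
  min_x = min_y = FLT_MAX;
  max_x = max_y = -FLT_MAX;

  for (int i = 0; i < 8; i++) {
    Vec3 corner = {(i & 1) ? bb_max.x : bb_min.x,
                   (i & 2) ? bb_max.y : bb_min.y,
                   (i & 4) ? bb_max.z : bb_min.z};
    Vec3 p = transform_point(worldtoraster, corner);
    min_x = std::min(min_x, p.x);
    min_y = std::min(min_y, p.y);
    max_x = std::max(max_x, p.x);
    max_y = std::max(max_y, p.y);
  }
}
```

Note this gives a 2D bounding box rather than an exact silhouette polygon; if you need tight boundaries per object you would have to rasterize an object-index pass and trace its outline instead.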
Hello,
I’ve been trying to do just that, but the results are not what I expected. How should I apply the worldtoraster matrix?
More specifically, I’m trying to get pixel-space bounding boxes. Is that the same as raster space in Cycles?
Raster space is defined the same way as in the RenderMan and OSL specifications, and could also be called pixel space.
You apply the matrix by calling transform_perspective with that matrix and a coordinate. Note that the worldtoraster matrix is only computed as part of the scene update, so it can’t be used before that.
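For example, a hedged sketch of that call. Header paths and the exact type of worldtoraster (Transform vs. ProjectionTransform) differ between Cycles versions, so treat the names below as assumptions rather than a verified snippet:

```cpp
/* Sketch only: project one world-space point to raster (pixel) space.
 * In newer source trees the headers are "scene/camera.h", "scene/scene.h"
 * and "util/transform.h" instead. */
#include "render/camera.h"
#include "render/scene.h"
#include "util/util_transform.h"

CCL_NAMESPACE_BEGIN

/* Only call this after the scene update has run, since worldtoraster is
 * computed as part of the camera/scene update. */
static float2 world_to_pixel(Scene *scene, const float3 P_world)
{
  const float3 P_raster = transform_perspective(&scene->camera->worldtoraster, P_world);
  /* x and y are pixel coordinates in the rendered image. */
  return make_float2(P_raster.x, P_raster.y);
}

CCL_NAMESPACE_END
```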