Hi! I’m using Blender for some research on landscape aesthetics, and the panoramic (equirectangular) camera is a great tool for this kind of assessment. I’d like to know more about its rendering process. Does it use an image-based rendering approach — offscreen rendering to cube map textures followed by warping in a post-processing step, as is commonly done in Unity3D — or some other, geometry-based approach? Thank you very much!
With ray tracing it’s pretty simple to implement these kinds of cameras — no need for cube maps or warping. We just trace each pixel’s ray in the appropriate direction. PBRT has information about this kind of thing:
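To make the idea concrete, here is a minimal sketch of the per-pixel mapping such a camera uses: each normalized image coordinate is converted to spherical angles, and the resulting unit vector is the ray direction for that pixel. The function name and axis conventions are illustrative, not Blender’s actual internals (this mirrors the spherical mapping described for PBRT’s `EnvironmentCamera`):

```python
import math

def equirect_ray_direction(u, v):
    """Map normalized image coordinates (u, v) in [0, 1] to a
    world-space ray direction on the unit sphere.

    u sweeps longitude (0 to 2*pi), v sweeps latitude (0 to pi).
    Axis conventions here are illustrative, not Blender's.
    """
    theta = math.pi * v          # polar angle, measured from +Z
    phi = 2.0 * math.pi * u      # azimuth angle around Z
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# Every pixel gets its own ray directly -- no cube map render,
# no warping pass. A tiny 4x2 "image" just to demonstrate:
for py in range(2):
    for px in range(4):
        u = (px + 0.5) / 4.0
        v = (py + 0.5) / 2.0
        direction = equirect_ray_direction(u, v)
```

Because the mapping is exact per pixel, the render avoids the resampling artifacts that cube-map-plus-warp pipelines can introduce at face seams.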
You guys are amazing, thank you very much!