Hi, I just saw this blog post on volumetric raytracing… The soft shadows look great, and the author says it can also be used for raytraced soft reflections and ambient occlusion. What do you think of this technique? http://blog.tuxedolabs.com/2018/10/17/from-screen-space-to-voxel-space.html
This is for a voxel renderer, which does have the advantage that some things like soft shadows are easier to do. But Blender modeling is mesh based, so unless voxel editing tools are added it's not immediately applicable.
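For intuition, the core trick in the linked post is to march a shadow ray through a voxel occupancy grid and widen the lookup with distance, which naturally softens shadows. A minimal sketch in pure Python (the function name, parameters, and sparse-set grid are my own simplifications, not the blog's actual code):

```python
def soft_shadow(occupied, start, light_dir, steps=16, step_len=1.0, cone=0.1):
    """March from `start` toward the light through a sparse voxel grid
    (`occupied` is a set of (x, y, z) cells) and return visibility in [0, 1].
    The lookup radius grows with distance, so occluders far from the
    receiving point cast blurrier shadows, which is the soft-shadow effect."""
    visibility = 1.0
    px, py, pz = start
    dx, dy, dz = light_dir
    for i in range(1, steps + 1):
        px += dx * step_len
        py += dy * step_len
        pz += dz * step_len
        r = int(cone * i)  # lookup radius in voxels at this distance
        cells = hits = 0
        for ox in range(-r, r + 1):
            for oy in range(-r, r + 1):
                for oz in range(-r, r + 1):
                    cells += 1
                    if (int(px) + ox, int(py) + oy, int(pz) + oz) in occupied:
                        hits += 1
        visibility *= 1.0 - hits / cells  # fraction of the widened lookup blocked
        if visibility <= 0.0:
            break
    return visibility
```

With `cone=0` this degenerates to a hard binary shadow ray; a wider cone trades sharpness for softness, which is exactly why the grid representation makes this cheap compared to tracing many rays against triangles.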
Just to satisfy my curiosity: is a tool that converts mesh objects to voxels something very complex to write?
Do Blender's metaballs have anything that conceptually approaches voxels?
Sorry for asking these questions as a non-expert in the math; I only need a short answer.
It’s easy to write a simple version, and many times harder to write a general system that integrates well with everything else.
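To make "a simple version" concrete, here is a hedged sketch of the naive approach: surface voxelization by sampling points on each triangle and snapping them to grid cells. All names here are hypothetical, and this is nowhere near what a general, production-quality converter would need (watertight solid fill, adaptive resolution, animation support):

```python
def voxelize(triangles, voxel_size, samples=8):
    """Return the set of (i, j, k) grid cells touched by any triangle.
    `triangles` is a list of three (x, y, z) vertex tuples each.
    Caveat of the naive approach: if a triangle is much larger than
    `voxel_size * samples`, the fixed sample count will miss cells."""
    cells = set()
    for a, b, c in triangles:
        # Barycentric sampling: p = a + u*(b - a) + v*(c - a), with u + v <= 1
        for i in range(samples + 1):
            for j in range(samples + 1 - i):
                u = i / samples
                v = j / samples
                p = tuple(a[k] + u * (b[k] - a[k]) + v * (c[k] - a[k])
                          for k in range(3))
                cells.add(tuple(int(p[k] // voxel_size) for k in range(3)))
    return cells
```

This covers the "easy" part in a few lines; the hard part the developer mentions is everything around it, e.g. keeping such a grid in sync with edits, modifiers, and animation.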
Thanks for the reply.
But now another inevitable question arises…
I'm not sure I understood:
if the goal is just the rendered result, where does the complexity come from?
Are you referring to animation and so on?
Suppose you have a scene with smooth surfaces, thin features like hair, and a mix of small and large objects. Converting this to voxels without losing too much detail is difficult, and likely very slow and memory intensive compared to keeping the meshes. Especially animated scenes would be inefficient.
So you could try to make that more efficient, or have a more complex system with code for both voxels and polygons and integration between them.
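To put a rough number on "memory intensive": a dense grid's footprint grows with the cube of its resolution, while the smallest feature it can resolve only shrinks linearly. A back-of-the-envelope calculation (assuming a hypothetical 1 byte per voxel; a real renderer would store more per cell, such as color and normals):

```python
def dense_grid_bytes(resolution, bytes_per_voxel=1):
    """Memory for a dense resolution**3 voxel grid."""
    return resolution ** 3 * bytes_per_voxel

# 1024^3 voxels at 1 byte each is already a full GiB, yet such a grid
# cannot resolve anything thinner than scene_size / 1024 (a single
# hair, for instance), and an animated scene would have to rebuild
# or update it every frame.
print(dense_grid_bytes(1024) / 2 ** 30)  # prints 1.0 (GiB)
```

Sparse structures (octrees, tiled grids) reduce this in practice, which is part of the extra complexity being described.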
From my ignorant perspective, if I had to imagine how it could work, it would be two separate layers: one of classic Cycles rendering,
and one of voxel raytracing, used to extract all the information where it is faster, and then the two somehow composited together to reach a compromise: decent quality at low render times.
Long ago I was fascinated by this video: