I’m playing with writing a toy rendering engine as a learning exercise and came to ask about how Cycles implements a few things: notably image sampling and prefiltering.
I’ve been reading up on texture sampling, and it seems image prefiltering is a common, perhaps required, step for getting good results from image-textured surfaces (I’m not sure whether the benefit is convergence speed or memory usage). Given that Cycles casts multiple rays per pixel, I would have expected that this prefiltering (determining the texture-space ray differential of one pixel to optimise image sampling) would not be required.
If such a step is implemented in Cycles, what are the pitfalls of naive image sampling without such a texture-space ray differential heuristic, and what visual difference does it result in?
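For concreteness, here is a rough sketch of the kind of heuristic I mean, in the style of OpenGL-style LOD selection from UV derivatives of neighbouring pixels (or ray differentials). All names here are mine, not Cycles code:

```python
import math

def footprint_lod(uv, uv_dx, uv_dy, tex_size):
    """Estimate a mip level from the pixel's texture-space footprint.

    uv      -- UV at this pixel
    uv_dx   -- UV one pixel to the right (or from the x ray differential)
    uv_dy   -- UV one pixel down (or from the y ray differential)
    The LOD is log2 of the longer footprint axis, measured in texels.
    """
    du_dx = (uv_dx[0] - uv[0]) * tex_size
    dv_dx = (uv_dx[1] - uv[1]) * tex_size
    du_dy = (uv_dy[0] - uv[0]) * tex_size
    dv_dy = (uv_dy[1] - uv[1]) * tex_size
    rho = max(math.hypot(du_dx, dv_dx), math.hypot(du_dy, dv_dy))
    return max(0.0, math.log2(max(rho, 1e-8)))

# Adjacent pixels 1/64 apart in UV on a 1024-texel texture:
# the footprint spans 16 texels, so LOD 4 is appropriate.
print(footprint_lod((0.0, 0.0), (1 / 64, 0.0), (0.0, 1 / 64), 1024))
```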
Texture filtering is useful even when you’re supersampling. One reason is that the sampling theorem tells you that a sharp edge in a texture has infinite frequency, but a finite number of rays can only reproduce finite frequencies without aliasing.
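A toy 1D illustration of that point, assuming a heavily minified checker texture (everything here is a made-up sketch, not Cycles code): point sampling with N jittered rays only converges to the filtered value at the Monte Carlo rate of O(1/sqrt(N)), whereas a single prefiltered (mip-style) lookup returns the box-filtered average directly.

```python
import random

def checker(u):
    """1D 'texture': a sharp black/white checker with period 1."""
    return 1.0 if int(u * 2) % 2 == 0 else 0.0

def point_sampled_pixel(u0, footprint, spp, rng):
    """Estimate a pixel by averaging spp point samples jittered
    across the pixel's texture-space footprint."""
    return sum(checker(u0 + rng.random() * footprint) for _ in range(spp)) / spp

def prefiltered_pixel(u0, footprint, steps=4096):
    """Reference: the box-filtered average over the same footprint
    (roughly what a single mip lookup approximates)."""
    return sum(checker(u0 + (i + 0.5) / steps * footprint) for i in range(steps)) / steps

rng = random.Random(7)
footprint = 8.37   # heavy minification: many checker periods per pixel
truth = prefiltered_pixel(0.0, footprint)
for spp in (4, 16, 64):
    est = point_sampled_pixel(0.0, footprint, spp, rng)
    print(spp, abs(est - truth))
```

The error of the point-sampled estimate shrinks slowly with sample count, and across neighbouring pixels that residual error shows up as the shimmering/noise you see on minified textures.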
Another reason to prefilter is that mip mapping allows you to use a texture-on-demand system, which means you can render almost any number and size of textures in a fairly modest, fixed memory footprint. For film production, texture assets can run into the hundreds of GBs, making loading all textures into RAM impossible. For GPU rendering, such a feature also allows rendering much larger scenes than VRAM can fit (see for example ProRender and Redshift).
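The mechanics of that are roughly as follows (a minimal sketch with names of my own choosing, not the actual on-demand system): the full pyramid costs only about 4/3 of the base level, and a distant surface only ever touches the coarse levels, so the fine levels never need to be resident.

```python
import math

def downsample(img):
    """2x2 box-filter one mip level into the next (half resolution)."""
    n = len(img) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(n)] for y in range(n)]

def build_mips(base):
    """Full mip pyramid for a square power-of-two image."""
    levels = [base]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

def select_level(levels, footprint_texels):
    """Pick the level whose texel size roughly matches the pixel
    footprint, so a single fetch covers the whole footprint."""
    lod = max(0.0, math.log2(max(footprint_texels, 1.0)))
    return min(int(round(lod)), len(levels) - 1)

base = [[float((x // 4 + y // 4) % 2) for x in range(16)] for y in range(16)]
mips = build_mips(base)
# A pixel covering ~8 texels only needs the 2x2 level, not the 16x16 base.
print(select_level(mips, 8.0), len(mips))  # → 3 5
```

In a real texture-on-demand system the levels (and tiles within them) are loaded lazily when `select_level` first requests them, which is what keeps the resident set small and fixed.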
If you want to play with such a feature in Cycles, I have implemented it for the CPU side:
Thank you for your insightful reply.
I gather from your statement that there are some significant advantages to mip mapping textures even with supersampling, but that it isn’t implemented in Cycles by default at the moment.
How significant is the difference in aliasing? I imagine the effect of that will also become negligible as sample count increases.
I am slowly making my way through that book - what a wonderful resource.