UV mapping limitation vs UVW (affine vs perspective mapping)

This kind of texture distortion on irregular/trapezoid quad faces makes low-poly work and texture painting unnecessarily difficult.
I found that in some situations it also affects texture baking.
Subdividing with the simple method, suggested by some as a workaround, is not practical either.

As I understand it, the affine interpolation used in Blender gives better performance, but perspective-corrected interpolation is more precise and would prevent this type of skewing at a negligible cost on current hardware. Also, mapping in Blender is calculated using just U and V coordinates; to use perspective-corrected interpolation we would need UVW coordinates.
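To make the UVW idea concrete, here is a small self-contained sketch (plain Python, not Blender code; all names and the test quad are my own) of the classic homogeneous-coordinate trick: each vertex of a planar quad gets a weight q derived from the diagonal intersection, the rasterizer interpolates (u·q, v·q, q) instead of (u, v), and dividing by q afterwards recovers a projective mapping. It also shows why plain affine interpolation gives a different answer depending on which diagonal the quad was split along:

```python
import math

# Sketch of perspective-correct quad texturing via homogeneous "UVQ"
# coordinates. Not Blender's implementation; everything here is illustrative.

# A symmetric trapezoid (bottom edge twice the top edge), with the corners
# of the unit texture square assigned as UVs.
P  = [(-1.0, 0.0), (1.0, 0.0), (0.5, 1.0), (-0.5, 1.0)]
UV = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def diag_intersection(p):
    """Intersection of the diagonals p0-p2 and p1-p3."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = p
    dx1, dy1 = x2 - x0, y2 - y0          # direction of p0 -> p2
    dx2, dy2 = x3 - x1, y3 - y1          # direction of p1 -> p3
    t = ((x1 - x0) * dy2 - (y1 - y0) * dx2) / (dx1 * dy2 - dy1 * dx2)
    return (x0 + t * dx1, y0 + t * dy1)

def q_weights(p):
    """Per-vertex homogeneous weight: q_i = (d_i + d_opp) / d_opp,
    where d_i is the distance from vertex i to the diagonal intersection."""
    c = diag_intersection(p)
    d = [math.dist(v, c) for v in p]
    return [(d[i] + d[(i + 2) % 4]) / d[(i + 2) % 4] for i in range(4)]

def barycentric(pt, a, b, c):
    """Barycentric weights of pt in triangle (a, b, c)."""
    (x, y), (xa, ya), (xb, yb), (xc, yc) = pt, a, b, c
    den = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
    wa = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / den
    wb = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / den
    return wa, wb, 1.0 - wa - wb

def affine_uv(pt, tri):
    """What happens today: U and V interpolated linearly over one triangle."""
    w = barycentric(pt, *(P[i] for i in tri))
    return tuple(sum(wi * UV[i][k] for wi, i in zip(w, tri)) for k in (0, 1))

def projective_uv(pt, tri, q):
    """UVQ trick: interpolate (u*q, v*q, q) linearly, then divide by q."""
    w = barycentric(pt, *(P[i] for i in tri))
    uq = sum(wi * UV[i][0] * q[i] for wi, i in zip(w, tri))
    vq = sum(wi * UV[i][1] * q[i] for wi, i in zip(w, tri))
    qs = sum(wi * q[i] for wi, i in zip(w, tri))
    return (uq / qs, vq / qs)

q = q_weights(P)
C = diag_intersection(P)  # a point lying on both possible diagonals

# Affine: the result depends on which diagonal the quad was split along.
print(affine_uv(C, (0, 1, 2)))         # ~(0.667, 0.667)
print(affine_uv(C, (0, 1, 3)))         # ~(0.333, 0.667)

# Projective: both triangulations agree and hit the texture centre.
print(projective_uv(C, (0, 1, 2), q))  # ~(0.5, 0.5)
print(projective_uv(C, (0, 1, 3), q))  # ~(0.5, 0.5)
```

Note this only handles a single planar quad in isolation; keeping the mapping continuous across neighbouring faces is a separate and harder problem.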

I think users should at least have the option to choose between the two methods.

This issue has also been reported on blender.community: https://blender.community/c/rightclickselect/9Ngbbc/


Are there any examples of real-world CG applications using the “correct” method? I am asking because I am trying to wrap my head around the maths of this problem. I don’t see how this could work without some sort of quad-awareness. In the “Affine” example in your image, try looking at just one of the two triangles, then imagine the other triangle being on one of the other two edges instead of where it is now, and think about how the distortion would have to change to fit your expectation. It appears any neighboring triangles need to be taken into consideration, i.e. this problem is obscenely non-trivial to solve, and the trapezoid situation is merely a very illustrative use case.


Thanks. Even if I don’t understand the math involved, I realize that it could be too complex to solve for little gain in only limited use cases, and there are enough workarounds if you are aware of it. I was wrongly assuming that Maya or Max were better, but that’s not the case.