I learned a little more about UDIM, and also talked briefly about the plans to split the Image Texture node.
The current directions are as follows:
I had assumed that a tile implied a z-offset, but it turns out it is really just an extension of the UV plane…
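To make the "extension of the UV plane" concrete, here is a small sketch of the standard UDIM numbering convention (tiles start at 1001 and run 10 across in U); the function names are my own, not anything from Blender's API:

```python
import math

def udim_tile(u, v):
    """Map a UV coordinate to its UDIM tile number.

    Tiles are laid out 10 per row in U, starting at 1001,
    so there is no third axis involved -- just UV offsets.
    """
    return 1001 + math.floor(u) + 10 * math.floor(v)

def udim_offset(tile):
    """Inverse: UV-space offset of a tile's lower-left corner."""
    t = tile - 1001
    return (t % 10, t // 10)
```

So a point at UV (1.5, 0.5) lands in tile 1002, and (0.5, 1.5) in tile 1011: the tile number encodes a 2D offset, nothing more.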
This is the first time in six years of using Blender (not in production) that I have encountered it, so now I am catching up on it quickly.
In general, given the different ways of addressing pixels (a vector, integer x/y coordinates, or a linear index), it makes more sense to split the node.
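The three addressing modes are trivially interconvertible, which is part of why splitting feels cleaner than cramming them into one node. A minimal sketch of the conversions (my own helper names, assuming row-major storage and sampling at pixel centers):

```python
def index_to_xy(i, width):
    """Linear index -> integer (x, y), row-major."""
    return (i % width, i // width)

def xy_to_index(x, y, width):
    """Integer (x, y) -> linear index, row-major."""
    return y * width + x

def xy_to_uv(x, y, width, height):
    """Integer (x, y) -> normalized UV vector, sampled at pixel centers."""
    return ((x + 0.5) / width, (y + 0.5) / height)
```

Each mode is just a different view of the same buffer, so separate nodes per mode would stay simple individually.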
If UDIM reading is kept the same as in the shader, it does not complicate the node that much.
As far as I understand, in geometry nodes a tile and a frame can have very similar technical implementations; the only open question is preparing all the image buffers.
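A way to see the similarity: once the buffers are prepared, both a UDIM tile and an animation frame reduce to picking one buffer out of a set by an integer key. This is a purely hypothetical sketch of that shared shape, not Blender's actual implementation:

```python
class ImageSource:
    """Hypothetical: a set of pre-loaded pixel buffers keyed by an integer.

    The key can be a UDIM tile number (1001, 1002, ...) or a frame
    number (1, 2, ...) -- the lookup logic is identical either way.
    """

    def __init__(self, buffers):
        self.buffers = buffers  # dict: int key -> 2D pixel buffer

    def sample(self, key, x, y):
        buf = self.buffers[key]
        return buf[y][x]
```

All the real work would be in filling `buffers` up front (loading every tile, or every frame of a sequence), which is exactly the "preparing all the image buffers" question.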
It was also mentioned that these are not concrete plans at the moment (there are many other things to do); these questions just came up, and I decided to raise them for users. The more I expand what images can do in geometry nodes, the more it seems that I am just reinventing video editing techniques.
I hope the use cases and designs don't boil down to just post-processing a bunch of galagram pixels.