Which is better for geometry nodes: frames or UDIM?

As you know, right now the Image Texture node in Geometry Nodes supports neither UDIM tiles nor a frame field.
https://developer.blender.org/T102918

But in terms of implementation, the difference between frames and tiles is not very big: this isn't running on the graphics card, it's just a memory read done by the CPU. So the question is, which has the higher priority?
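
To make that concrete, here is a minimal sketch (hypothetical types, not actual Blender code) of why frames and tiles look almost identical on the CPU side: both reduce to "pick a buffer by integer index, then read a pixel".

```cpp
#include <cstddef>
#include <vector>

struct PixelBuffer {
  int width = 0, height = 0;
  std::vector<float> rgba;  // width * height * 4 floats
};

// One logical image is just an array of pixel buffers. Whether the index
// means "animation frame" or "UDIM tile" makes no difference to the read.
struct LogicalImage {
  std::vector<PixelBuffer> buffers;
};

static const float *read_pixel(const LogicalImage &image, int buffer_index, int x, int y)
{
  const PixelBuffer &buf = image.buffers[buffer_index];
  return &buf.rgba[(static_cast<std::size_t>(y) * buf.width + x) * 4];
}
```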

We need both your opinions and examples showing why one is better than the other.

There is already a frame input on the node. Are you proposing to turn that into a field somehow, and what would that do exactly?

If you want to use different images based on texture coordinates, you should use UDIM; that's what it's for. That's already the way it works elsewhere in Blender.
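
For reference, the usual UDIM convention maps the integer part of the UV coordinate to a tile number, ten tiles per row. A minimal sketch (illustrative names, not Blender API):

```cpp
#include <cmath>

struct UdimLookup {
  int tile_number;  // 1001, 1002, ..., 1011, ...
  float u, v;       // coordinates inside the tile, in [0, 1)
};

static UdimLookup udim_from_uv(float u, float v)
{
  const int ui = int(std::floor(u));  // tile column, expected 0..9
  const int vi = int(std::floor(v));  // tile row
  return {1001 + ui + 10 * vi, u - float(ui), v - float(vi)};
}
// Example: UV (0.5, 0.5) is tile 1001, UV (1.5, 2.5) is tile 1022.
```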

I think eventually we’d need both? Frames on a per-tile basis.

Probably part of the bookkeeping could be re-used, indeed. I'd say start with whichever is easiest to support, and try to keep re-usability for the other purpose in mind?

A frames field would be really useful.
And yes, in shaders it's just tiles. But if one of the reasons for that is the technical difficulty of loading all the frames onto the video card, that difficulty doesn't exist here.

You need to clarify what you mean by “frames”. If you just mean regular animation frames, then UDIMs and frames are completely different things. In theory, you can use UDIMs as an image atlas where you grab pieces of a texture using UV coordinates, as is done in games, but that is not necessarily why UDIM UV tiles were initially introduced in CG. UDIMs are generally used for gaining more texture resolution efficiently.

For instance, I am working on an asset that uses 28 8K UDIM tiles. If we did not have UDIMs, I would need a single texture on the order of 224,000 pixels across (28 × 8K) just for that asset.

We need proper support for UDIMs in any node that uses images. As Brecht said, this already works properly everywhere else in Blender, including texture baking.

Yes, I see: different applications, and difficult to interchange.
But if we want to implement only one of them, we need a good reason for refusing the second,
because doing both could turn into a nightmare.

In theory, the image node in GN should be identical to the image node in the shader and compositor nodes. We need both UDIM UV support and frame sequence support.

So we're moving towards both: adding tiles, and having a frame field in the future. And the main problem I'm raising is how to technically manage the 4D data (u, v, tile, frame).

Yes.
Both, please.
Separate coordinates for frames and tiles.

Conceptually, UDIMs are just a grid of tiles, and each tile is an array of frames. You don't need fractional coordinates for either of them, so they are just sparse arrays. I'm probably overlooking something (haha, of course I am!), but conceptually I don't understand what is so hard. Maybe explaining the difficulties will help clear up any confusion?

It's not really 4D data either, because the frames and UDIM tiles would need to be separate inputs.
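
If it helps, here is a conceptual sketch of that structure (hypothetical types, not Blender code): a sparse set of tiles, each holding a plain array of frames, addressed by two separate integer inputs.

```cpp
#include <unordered_map>
#include <vector>

struct PixelBuffer { /* pixel storage */ };

// Sparse set of UDIM tiles; only tiles that actually exist are stored.
// Each tile holds its frames as an ordinary array, so "tile" and "frame"
// stay two independent integer indices rather than one fused 4D lookup.
struct TiledImageSequence {
  std::unordered_map<int, std::vector<PixelBuffer>> tiles;  // key: UDIM number

  const PixelBuffer *find(int tile_number, int frame) const
  {
    const auto it = tiles.find(tile_number);
    if (it == tiles.end() || frame < 0 || frame >= int(it->second.size())) {
      return nullptr;  // missing tile or out-of-range frame
    }
    return &it->second[frame];
  }
};
```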

I've learned a little more about UDIM.
I've also talked a little about the plans to split the Image Texture node.
The current thinking goes in the following directions:

It seemed to me that a tile meant a z-offset (like a layer index in a texture array), but it looks like it's really just an extension of the UV plane…
Well, for the first time in 6 years of using Blender (not in production) I have run into this, so now I'm learning about it at a fast pace.

In general, it makes more sense to split the node by pixel-addressing method (vector, integer x/y, linear index).
If we keep the UDIM reading the same as in the shader, it doesn't complicate the node that much.
As I understand it, in Geometry Nodes the tile and the frame can have very similar technical implementations; the only question is in preparing all the image buffers.
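
A minimal sketch of those three addressing modes (hypothetical helpers), showing that they only differ in how the final offset is produced, not in how the read happens:

```cpp
#include <algorithm>
#include <cstdint>

// Integer x/y: the offset is a plain row-major computation.
static int64_t index_from_xy(int x, int y, int width)
{
  return int64_t(y) * width + x;
}

// Vector (UV): clamp to the unit square, quantize to a pixel, reuse x/y.
static int64_t index_from_uv(float u, float v, int width, int height)
{
  const int x = std::clamp(int(u * float(width)), 0, width - 1);
  const int y = std::clamp(int(v * float(height)), 0, height - 1);
  return index_from_xy(x, y, width);
}

// Linear index: no conversion needed at all.
static int64_t index_from_linear(int64_t i)
{
  return i;
}
```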

As also mentioned, these are not actual plans at the moment (there are many other things to do). These questions just suddenly came up, and I decided to raise them with users. The more I expand what images can do in Geometry Nodes, the more it feels like I'm just reinventing video editing.
I hope the use cases and designs don't boil down to how you can post-process a giant pile of pixels :joy:

I liked the idea of using a UDIM composed of captured views of a high-res asset: a view-dependent baked atlas, sort of like a NeRF. You could use position data to project the correct texture depending on the new camera view and lerp the coordinates.
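
A rough sketch of how that selection could work (all names hypothetical, just to illustrate the idea): pick the captured view whose baked direction best matches the current camera direction, which in turn selects the tile in the atlas.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b)
{
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct CapturedView {
  Vec3 direction;   // unit direction the view was baked from
  int tile_number;  // UDIM tile holding that baked view
};

// Return the tile whose capture direction is closest to the camera's.
// Assumes at least one view; a smoother version could blend the two
// best matches instead of picking a single winner.
static int best_view_tile(const std::vector<CapturedView> &views, const Vec3 &camera_dir)
{
  int best_tile = views.front().tile_number;
  float best_dot = -2.0f;
  for (const CapturedView &view : views) {
    const float d = dot(view.direction, camera_dir);
    if (d > best_dot) {
      best_dot = d;
      best_tile = view.tile_number;
    }
  }
  return best_tile;
}
```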
