UDIM GPU memory issue

Hi Guys,

I’m currently testing the UDIM texture import from Lukas.

In my test file I’m trying to use it with a simple character, 200 UDIMs (pretty standard for VFX; our characters usually have around 100-400 UDIMs, 4K EXRs which get converted to DDS in some cases).

The only way I seem to get it to work is to switch the viewport to Cycles CPU rendering, import them into an empty Image Texture node, and connect it to the shader; then it seems to work fine.

Switching to Eevee crashes Blender 10/10 times, and Cycles GPU rendering gives me "CUDA error cuCtxCreate: Out of memory". (My current Linux workstation has 64 GB of RAM and 8 GB of video memory.)

Since this seems like a limitation rather than a bug, does anyone have some pointers on how to handle issues like these in a production environment?

The number of textures would be waaay higher in a render scene than this simple test with 200 UDIMs.

Cheers,

Dan

Considering that a normal PBR metallic/roughness setup has a minimum of three textures, 100 UDIMs are 300 EXR (full float?) textures. Probably your best solution is to spend around ten or fifteen thousand dollars on Quadro RTX 6000/8000 cards and bridge them to get enough memory to manage that.
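For a rough sense of the numbers, here is a back-of-the-envelope estimate (my own sketch, not how Blender actually allocates memory; it ignores mipmaps, compression, and driver overhead, so real usage would be higher):

```python
# Rough VRAM estimate for a UDIM texture set, assuming uncompressed
# GPU storage. bytes_per_channel=4 models full-float EXR; 2 would
# model half float.
def texture_set_bytes(udim_count, maps_per_udim=3, resolution=4096,
                      channels=4, bytes_per_channel=4):
    per_texture = resolution * resolution * channels * bytes_per_channel
    return udim_count * maps_per_udim * per_texture

gib = texture_set_bytes(100) / 2**30
print(f"100 UDIMs x 3 maps at 4K full float: ~{gib:.0f} GiB")  # ~75 GiB
```

Even at half float that only halves, so an 8 GB card is nowhere near enough to hold full-resolution texture sets of this size.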
Even with that, I’m curious whether OpenGL drivers can handle that number of textures and that memory load. @fclem, does OpenGL have a limitation on the amount of memory it can handle, or is it just the hardware limit?
Eevee is OpenGL, so having only 8 GB is simply not enough to load that much, and I’m very curious whether the new NVIDIA bridge would work at all . . .

The only solution I have used to handle large amounts of texture memory is to limit the texture size for the viewport; that way I can “work” comfortably until the project is rendered by the CPU.
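To illustrate why a viewport size clamp helps so much, a quick calculation (illustrative only; it assumes textures are simply downscaled to fit the clamp and stored uncompressed):

```python
# Per-texture memory before and after a viewport texture-size clamp,
# assuming full-float RGBA storage (4 channels x 4 bytes).
def clamped_bytes(resolution, clamp, channels=4, bytes_per_channel=4):
    side = min(resolution, clamp)
    return side * side * channels * bytes_per_channel

full = clamped_bytes(4096, 4096)  # 256 MiB per full-float 4K texture
low = clamped_bytes(4096, 1024)   # 16 MiB after a 1K clamp
print(f"reduction factor: {full // low}x")  # 16x
```

Since the cost scales with the square of the edge length, even a modest clamp cuts memory dramatically.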


But I have never worked with anything near as heavy as 300 to 600 4K textures on a single model.
How does other software manage this? Does it load textures in parts while unloading others?

NahuelBelich

Thanks for replying. We’d use RTX 6000s on the render farm; it’s just the workstations at the company that have lower specs (anything from a 1070 to a Quadro 4000 usually, for most departments).

I’ll try the texture limits; it might be feasible for some parts of the pipeline, but for things like texture painting full res would be required. Mari doesn’t have problems with a huge number of UDIMs, Maya usually craps itself, and the renderers I’ve seen in production (RenderMan, Clarisse, Manuka, Arnold) can handle them quite well.

For Blender, the biggest issues would be painting textures on a single asset, or rendering a compiled scene with, let’s say, 10 characters in a jungle. 200-300 UDIMs is not rare for a character; I just looked at a random rock asset and it had 20, a spaceship could easily have 500+, etc. Rendering in 4K really did a number on UDIM counts.

edit - tried texture limits; even putting it down to 128 crashes instantly. The only way to make it work is switching to Cycles CPU before connecting the textures.

Googling a bit, there are some limitations from an OpenGL point of view on the number of textures that can be loaded:
https://www.khronos.org/opengl/wiki/Shader#Resource_limitations
But I’m not a developer, just a user, so there may be tricks around this; let’s see what the real developers add to this subject.

Ye, same here - not a dev, just a guy who tries to push for good software that makes life easier at the companies he works for. :slight_smile:

Would be also interesting to hear from production teams who are using Blender, I’m sure there are more and more of them.

that was fast - ⚙ D6416 Fix T72467: Crash when using many (>64) images in a shader

try to beat that, autodesk… :slight_smile:
