Layered Textures Design Feedback

That is pretty cool… maybe nitpicking at this stage, but I would suggest being able to choose the height and width of the texture separately instead of sticking to squares. Then maybe the preview res could be a division of that, like 1/2, 1/4, 1/8?
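
Just to sketch what I mean (plain Python, and the numbers are only an example of a separately chosen width and height, not anything from the design):

```python
# Hypothetical sketch: preview resolution as a simple division of the
# full texture size (1/1, 1/2, 1/4, 1/8) instead of a free-form value.
FULL_W, FULL_H = 2048, 1024  # separately chosen width and height

for divisor in (1, 2, 4, 8):
    print(f"1/{divisor}: {FULL_W // divisor}x{FULL_H // divisor}")
# 1/1: 2048x1024
# 1/2: 1024x512
# 1/4: 512x256
# 1/8: 256x128
```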

edit: also, from personal experience, .tga has a better-separated alpha map, and as far as I know not a lot of artists in the game industry use .png, except maybe for UI. For example, exporting a PSD without a properly authored alpha channel to PNG can cause quite a lot of issues, because it can produce artifacts in the color channels where the alpha resides. This is especially apparent when creating textures for premultiplied-alpha effects. But I might be mistaken and this may not be the case when exporting from Blender. Maybe it's just that whenever I see ".png" it makes the hairs on the back of my neck stand on end, because of the headaches it caused during production.

Not a friend of PNG either. But it looks like PNG was just used as an example; I don't see anything that would limit us to PNG.

For technical reasons, in end use (game engines such as Unreal) textures have to be square powers of two, and given that UVs are square, there's little reason I can see to do anything else. If you have individual textures of non-integral size, then pack them into an atlas. If different sizes are allowed though (why not?), then please have a 'lock chain' button to simply lock them together, ideally with integral power-of-two steps.
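
A rough sketch of how such a lock chain with power-of-two steps could behave (plain Python; the function names are made up for illustration, none of this is actual Blender API):

```python
# Hypothetical sketch of a "lock chain" between width and height:
# editing one dimension keeps the aspect ratio and snaps both to
# powers of two.
def nearest_pow2(n: int) -> int:
    """Round n to the nearest power of two (minimum 1)."""
    if n < 2:
        return 1
    lo = 1 << (n.bit_length() - 1)   # largest power of two <= n
    hi = lo << 1                     # smallest power of two > n
    return lo if (n - lo) <= (hi - n) else hi

def set_width_locked(width: int, ratio: float) -> tuple[int, int]:
    """With the chain locked, derive height from width via the ratio."""
    w = nearest_pow2(width)
    return w, nearest_pow2(round(w * ratio))

print(set_width_locked(1000, 0.5))  # -> (1024, 512)
```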

Rectangular textures (512x1024, etc.) are less common these days, but they can still be used in old game engines. Plus, Blender already supports them in other places (texture painting, etc.), so I don't see a reason to exclude them.

That's not really true for modern GPUs and APIs, but it was the case in the past and can be for legacy reasons/HW.

Unreal requires square powers of two; I don't know the specifics, but in tech talks they've cited technical reasons for this. I don't know about other engines, but regardless, you still have square UVs.

Re: power of two or not

Power-of-two textures are still relevant, because of texture sampling and mipmapping.

When the GPU wants to sample a texture during rendering, it needs to load it, usually from VRAM where it sits. But believe it or not, VRAM is slow compared to the GPU cores; it takes a few dozen GPU cycles to get the requested data from VRAM. So instead of fetching texels one by one, the GPU loads chunks of the texture into its caches (L3/L2/L1) to have near-instant access to that data.

It is as relevant as it was in the past.

When an object has a huge texture (like 4K), the object is small on screen, and there are no mipmaps, loading a small chunk of the texture won't help much, because the next pixel the GPU works on requires a totally different section of the texture that is not in the cache, so the GPU has to wait until it arrives from VRAM (a cache miss).
But when there are mipmaps, the GPU will load the lower-resolution texture LOD, where a chunk of the texture has its neighboring texels present in the cache, so it will render faster.

Textures with mipmaps should have power-of-two dimensions, but they do not have to be square. If you have non-power-of-two dimensions and want to use mips, you will lose data or waste memory.
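
To make the data-loss / wasted-memory point concrete, here is a small sketch (plain Python, not Blender API) comparing the mip chain of a power-of-two rectangular texture with a non-power-of-two one:

```python
# Sketch: each mip level halves both dimensions (minimum 1 texel).
def mip_chain(w: int, h: int) -> list[tuple[int, int]]:
    levels = [(w, h)]
    while w > 1 or h > 1:
        w, h = max(w // 2, 1), max(h // 2, 1)
        levels.append((w, h))
    return levels

# Rectangular but power of two: every level divides evenly.
print(mip_chain(512, 1024)[:4])  # [(512, 1024), (256, 512), (128, 256), (64, 128)]

# Non power of two: odd sizes appear (125, 62, ...), so each downsample
# either drops texels (data loss) or the texture must first be padded up
# to the next power of two (wasted memory).
print(mip_chain(500, 1000)[:4])  # [(500, 1000), (250, 500), (125, 250), (62, 125)]
```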

Exactly. Unity supports the use of 512x1024 textures, for example; they fit nicely in memory, with no data loss while mipmapping. I have made some textures for a 2.5D game whose background mainly consisted of wide elements. It benefited us there to have a 2048x1024 texture resolution to fit everything for one level's background. But as @DrM is suggesting, a lock chain would be great.

@DrM As for UVing, using the texture's aspect ratio is something I used in Maya. Internally, of course, it's just squares, but visually you can UV using the texture's aspect ratio.

That's the first I've heard that POT textures aren't relevant any more. Every game engine I've used still scales any texture to the next power of two if it's anything else. Even for my non-realtime stuff, I try to stick to power-of-two textures as far as possible, just in case I ever need to adapt it to anything realtime (and out of habit).

Do you have a reference showing it's becoming obsolete in game engines nowadays? I'd really love to see this if true. :astonished:

Or do you specifically mean rectangular power of two?
In which case I'd still say it's not really legacy at this point, either. Trim sheets, for example, certainly don't need to be square all the time.

One question: why is there a division line between Normal and AO?

Also, do you envision that it will create new textures based on the token names? So this will end up with:

mycharacter_base_color.png (rgba)
mycharacter_roughmetalao.png (rgb)
mycharacter_normal_map.png (rgb)

Or are the textures exported separately and then packed separately?
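
In case it helps the discussion, this is roughly what I imagine token-driven naming plus channel packing would look like (a NumPy sketch; the token scheme and channel layout are my assumptions based on the file names above, not the actual design):

```python
import numpy as np

def output_name(asset: str, token: str, ext: str = "png") -> str:
    # Hypothetical naming: <asset>_<token>.<ext>, e.g. mycharacter_normal_map.png
    return f"{asset}_{token}.{ext}"

def pack_rgb(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pack three grayscale maps into the channels of one RGB image."""
    return np.stack([r, g, b], axis=-1)

rough = np.zeros((1024, 1024), dtype=np.uint8)
metal = np.zeros((1024, 1024), dtype=np.uint8)
ao    = np.full((1024, 1024), 255, dtype=np.uint8)

packed = pack_rgb(rough, metal, ao)                 # shape (1024, 1024, 3)
name   = output_name("mycharacter", "roughmetalao") # "mycharacter_roughmetalao.png"
```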

I never said it was obsolete, I said

"That's not really true for modern GPUs and APIs, but ***it was the case in the past and can be for legacy reasons/HW***."

in response to

"textures have to be square powers of two"

Italicized bold emphasis mine.

Ah, I see. I got a little confused by the naming 'legacy'. I think it's still a very relevant way to create textures today. But I agree: it should never be a hard limit. Blender should neither dictate nor assume whether the user needs square or non-square textures as a result.

Sorry if it was discussed here already, but I was wondering if this texturing system could be used for creating generic seamless textures, or is it being designed only for baking unique maps for a specific mesh?

While I'm unsure whether feedback is still being taken at this point, I have some thoughts regarding how layered textures are expected to be integrated into the shader editor (materials).

The proposed Texture Channels node seems rather high-level, and I don't know if it offers enough control to resolve some of our current limitations in EEVEE:

  • How are textures sampled?
  • What if we need to sample a texture multiple times?
  • How do we reuse image units and samplers more effectively?

I would much prefer a setup that involves an "Image Object node" and a "Texture Sampler node". If you have used any node-based shader system, this is by far the most common design, as it maps much better to the underlying shader.

To make it more appealing to non-technical users, I propose that the "Image Object node" not be limited to 4 output channels. It can retain all the object-oriented / photo-editor-file feel that the "Texture Channels node" tries to capture: output as many textures as you would like!

But we should explicitly sample these textures via a "Texture Sampler node", just like the Image Texture node of today. The only difference is that the former takes 2 inputs: a Texture input from the "Image Object node" and a UV vector; the image unit and sampler should be shared by default.

Yes, this is lower level than the Texture Channels node, but it gives us proper control and potential for optimization, which will be important when it comes to making things game-engine ready (the whole point of baking and texture packing).
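
To make the proposal concrete, here is a purely hypothetical sketch of the split (none of these node types exist in Blender; the names and signatures are invented for illustration):

```python
# Purely hypothetical: an Image Object node that owns the data and
# exposes texture outputs, plus Texture Sampler nodes that sample it.
class ImageObjectNode:
    """Owns the image datablock; outputs as many textures as you like."""
    def __init__(self, image: str, channels: list[str]):
        self.image = image
        self.outputs = {name: (self, name) for name in channels}

class TextureSamplerNode:
    """Samples one texture output at a UV; like today's Image Texture
    node, but with the Texture input exposed. Image units and samplers
    are shared by default across samplers of the same image."""
    def __init__(self, texture, uv: str):
        self.texture = texture
        self.uv = uv

img = ImageObjectNode("mycharacter", ["base_color", "roughness", "normal"])

# Sampling the same texture several times (e.g. for triplanar mapping)
# is just more sampler nodes; the image unit is reused, not duplicated.
s1 = TextureSamplerNode(img.outputs["base_color"], uv="uv_x")
s2 = TextureSamplerNode(img.outputs["base_color"], uv="uv_y")
```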

Making the Image datablock available as a socket to be linked is planned, though it's independent of this project. However, I would be careful about bothering users with implementation details like image units and texture samplers. Reusing those can be done without users having to explicitly set it up. The texture and shading system in Blender is also not aimed only at game engines.

This is off-topic, but one of my problems with the "Image Texture node" will very likely recur with the "Texture Channel node": it attempts to encapsulate how the texture is sampled. Its Box projection option, for example, is only correct for base color, not normals. My only option was to rebuild a proper triplanar shader (which easily runs into the sampler limit in OpenGL).
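
For reference, the blend-weight part of such a triplanar shader looks roughly like this (a NumPy sketch; the sharpness value is arbitrary). The catch for normal maps is the extra per-axis reorientation step noted in the comments, which a plain box-projected sample skips:

```python
import numpy as np

def triplanar_weights(n: np.ndarray, sharpness: float = 4.0) -> np.ndarray:
    """Blend weights for the X/Y/Z projections from the surface normal."""
    w = np.abs(n) ** sharpness
    return w / w.sum()

n = np.array([0.8, 0.1, 0.6])       # surface normal (example values)
wx, wy, wz = triplanar_weights(n)

# For base color this is enough:  color = wx*cx + wy*cy + wz*cz
# For normal maps each sampled normal must first be swizzled into the
# frame of its projection axis before blending; blending the raw RGB
# samples (as box projection does) gives incorrect shading.
```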

As far as I know, Eevee already uses no more image units or samplers with 3 image texture nodes that have the same settings than it does with 1. And if it doesn't, that's easy to optimize automatically rather than requiring the user to care about it.

My experience is unfortunately the contrary, though I agree it should be solvable with or without a dedicated sampler node. (Just to clarify: in my mind, the Texture Sampler node is just an Image Texture node with the Texture input exposed.)

Q: How would you address the UI usability problem that people have to set up multiple Texture Channels nodes in order to sample them multiple times?

This looks like you muted all 3 nodes, so the image would not be used at all. Anyway, if just turning the image into a socket is enough, that's already planned.

Since the texture would be baked to an image datablock that you can use, I’m not sure which additional usability problem there is to solve.

Based on this image, does the sampling take place within the Texture Channels node?

Will there be Image-type outputs from Texture Channels? Because I think sampling should take place in the updated Image Texture node, e.g. Texture Channels → Image Texture → BSDF Shader.