Layered Textures Design Feedback

@Stimes, note that in the proposal you would not need to open a separate texture nodes editor. Rather you would be able to press Tab to enter the texture and edit its contents in the shader node editor, similar to a shader node group.

The reason for dedicated texture nodes is not implementation, but design. It would be interesting to see an alternative design fully based on shader nodes. Such a design would need to address a higher level UI, baking workflow, sharing of baked images between multiple meshes and materials, textures for geometry nodes and brushes, etc. I know that’s a big ask, but all the pieces must fit together somehow.

1 Like

Hi all, that was a long thread to read, I hope I understood the main topic correctly.
My feedback so far is that I like the initial design very much, and it’s actually almost exactly how I imagined a layered texture baking system years ago when I started using Blender.

I agree with @LudvikKoutny and @jonlampel , my feedback would be more or less the same.

I’m trying hard to quote people who already expressed their opinions, to avoid repeating concepts already explained, but I have 7 Chrome tabs open on the same thread as bookmarks and still can’t keep track of who said what. So I’ll try to explain some ideas about the baking workflow I had in mind.

First of all I’d like to conceptually separate (at least workflow-wise) different types of baked textures:

1) Output baked textures: the final baked textures that can be used outside Blender or reused in Blender (e.g. the texture set of a character to be sent to a game engine).

2) Cached baked textures: any texture result baked at some point of the texture node graph because it has to be rasterized before it can be processed further (e.g. the input to a blur node). This can be seen as a generic cache or checkpoint node, as mentioned in a geometry-nodes-related blog post.

3) Auxiliary baked textures: any baked texture that is used to guide procedurally generated effects (e.g. AO, cavity, position, normal, etc.).

How are the output baked textures produced?
I imagine a node separate from the texture node, just like the compositor output node. The output node takes care of all the baking settings like resolution, format, path, names, etc.
About the output domain (Texture, Attribute, Volume): the first thing I can think of is that it could be a setting of the output node, e.g. write to file, write to mesh attribute, write to packed datablock, write to volume, write to volume VDB file, etc.
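
Just to make the idea concrete, something like the following hypothetical settings on such an output node (names are made up purely for illustration, nothing here is from the actual proposal):

```python
from dataclasses import dataclass
from enum import Enum, auto


class OutputDomain(Enum):
    """Hypothetical output targets for a bake output node."""
    IMAGE_FILE = auto()       # write to an external image file
    PACKED_IMAGE = auto()     # write to an image datablock packed in the .blend
    MESH_ATTRIBUTE = auto()   # write to a mesh attribute
    VOLUME_GRID = auto()      # write to a volume datablock
    VOLUME_VDB_FILE = auto()  # write to an external .vdb file


@dataclass
class BakeOutputSettings:
    """Illustrative settings such an output node could expose."""
    domain: OutputDomain = OutputDomain.IMAGE_FILE
    resolution: tuple = (2048, 2048)
    file_format: str = "PNG"
    directory: str = "//textures/"          # Blender-style relative path
    name_pattern: str = "{object}_{channel}"
```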

Where do these output baked textures live?
By default in the output folder specified by the user, outside the .blend file. But nothing prevents users from packing them like any other texture.

How are the cached baked textures produced?
I can see two scenarios:

a) A standalone “cache” or “checkpoint” node that can serve two purposes: avoiding heavy recalculation of the graph before it when nothing has changed, and being mandatory before any node that needs a baked raster texture, such as “blur”.
Any “cache” node should have an “auto-update” checkbox to update it at every change in the upstream graph (when possible without compromising general responsiveness), and an “update” button to update it manually. In any case, there should be a centralized panel, bar or workspace with an “update all” button to update (that is, bake) all the “cache” nodes in the scene at once. I’ll talk about that in detail later.

b) The “blur” node and similar “need-a-bake” nodes include the caching function themselves, so the user doesn’t need to manually put a “cache” node before them. The “need-a-bake” nodes have the same “auto-update” checkbox and manual “update” button, and they get updated as well when the global “update all” button is pressed. This scenario doesn’t exclude the existence of the general “cache” node discussed before, to be used to improve performance, of course.

In any case, if a “cache” node is not up to date due to changes upstream, it should show a big warning badge (just like geometry nodes with missing data), and there should be a warning badge visible from a centralized area (again, I will explain it later). Plus, if the user tries to bake output textures and any related “cache” node is not up to date, the user should be warned as well (pop-up and/or info area).
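
To make the intended behavior concrete, here is a minimal, purely hypothetical Python sketch of the dirty tracking and the global “update all” I have in mind (none of these names exist in Blender):

```python
def bake_upstream_graph():
    """Stand-in for actually evaluating and rasterizing the upstream nodes."""
    return object()  # would be an image buffer in a real implementation


class CacheNode:
    """Hypothetical cache/checkpoint node with manual and automatic updates."""

    def __init__(self, auto_update=False):
        self.auto_update = auto_update
        self.cached_image = None
        self.is_dirty = True  # drives the warning badge

    def upstream_changed(self):
        """Called whenever anything before this node in the graph changes."""
        self.is_dirty = True
        if self.auto_update:
            self.update()

    def update(self):
        """Re-bake the cached raster texture (the manual 'Update' button)."""
        self.cached_image = bake_upstream_graph()
        self.is_dirty = False


def update_all(cache_nodes):
    """The centralized 'update all' button: re-bake every stale cache node."""
    for node in cache_nodes:
        if node.is_dirty:
            node.update()
```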

Where do the cached baked textures live?
To me it makes more sense if these textures are packed in the file, to avoid cluttering the filesystem at the expense of file size. But nothing prevents a setting that lets users store these cached textures externally if they want to.

How are auxiliary baked textures produced?
As far as I can tell, these are strictly related to the mesh, or to a material that is specific to that mesh (or to a group of meshes that share the material and have UVs packed in the same space, not overlapping).
I can see the same output node as for the “output baked textures” being used to bake these kinds of textures, but maybe they can be packed into the .blend file by default, and have a collection input to determine the high-resolution mesh and use suffixes as already suggested.

Where do the auxiliary baked textures live?
As already said, packed in the file by default. Specifically as a texture datablock in the material of the “low” model, or a new datablock tied to the mesh data (like UVs, shape keys, etc.).

In the case of the node in the material, users can explicitly plug the node output into the procedural filters that require it.

If a datablock tied to the mesh is used, an input node that gives access to the mesh datablock can be used.

Centralized area for baking

A very important concept that I’d like to see is a centralized area to have an overview of all the bake nodes in the scenes.
To me, a new dedicated bake tab in the properties panel (as already suggested before) could be nice. I vote for a cooking pot icon :slight_smile: :stew:

I don’t have a clear idea of how the panel should look yet. But I imagine it having at least:

  • a button to update all the “need-a-bake” nodes at once

  • a list of all the output nodes present in the scene, with buttons to mark them “enabled” or “disabled” for baking, like render layers

  • a button to bake all the “output bake nodes” marked “enabled” at once.

  • a global resolution setting stored at scene level, with a mechanism to override it per output node when needed, since as I’ve stated before, I imagine every output node having control over resolution. In that regard, maybe every node could choose whether to use the scene resolution or its own resolution.

  • an area to get access to all the output paths and output file names in one place

Let me know what you think. I’m pretty sure I’m overlooking a lot of things, but I hope this post will be useful as inspiration for more insights.

2 Likes

@RiccardoBancone thanks for the ideas.

It leaves me with mostly the same questions I had about ideas from @LudvikKoutny and @jonlampel though, which are unanswered still.

In your proposal, I think the answer to “Where do baked output/cached/auxiliary textures live?” is incomplete. It’s not just “stored externally” or “packed in the .blend file”. What’s more interesting is, which datablock are they stored on or linked to? Image, Texture, Material, Mesh, Object, Scene, … ?

For example, if you have a cache node inside texture nodes, is that cache shared between all instances of those texture nodes? Is it specialized for every instance? Which datablock is the cache stored on or linked to? How does this affect use cases like baking multiple meshes and materials into one image texture?

For reference, the answer to that question in my design is: they are all together in one Image datablock, which is linked from a shader node in a Material datablock. This Image datablock may be linked by multiple Material datablocks for the purpose of baking multiple meshes and materials to one image.
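
Schematically, the linking could be pictured roughly like this (just a toy illustration of the relationships, not actual Blender data structures or API):

```python
class Image:
    """One Image datablock holding all the baked results together."""
    def __init__(self, name):
        self.name = name


class Material:
    """A Material whose texture shader node links to the shared Image."""
    def __init__(self, name, bake_image):
        self.name = name
        self.bake_image = bake_image  # a link, not an owned copy


# Multiple materials (and the meshes using them) bake into the same image.
atlas = Image("character_bake")
materials = [Material("skin", atlas), Material("clothes", atlas)]
assert all(m.bake_image is atlas for m in materials)
```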

5 Likes

Thank you for reading through!

I’m not entirely clear on this part. Why can’t it have inputs?

While potentially wasteful if used improperly, the suggestion was to have a different bake for each domain or instance of the texture datablock, and have the texture datablock return the appropriate bake wherever it is used. If a matching bake is not found it would know to bake a new one. The texture datablock could then be used everywhere as if it was a procedural texture, even if it’s a baked image under the hood. The bakes would essentially act as a cache for each use of the datablock, but could also be found by the user for external/other use. Is that possible or is 1 bake per datablock a hard limitation?

Also note that texture datablocks need input texture maps for AO, curvature, etc. Would those be set up through a (Bake Input?) node, or be a part of the texture datablock in some other way?

I would have assumed that those would be plugged in as user customizable inputs to the texture datablock. For example, one texture datablock could be used for generating the AO map and then a second texture datablock could be used to create procedural grime based on it. These two would be chained together in the shader editor to get the final result.

With your design, a solution would be wrapping that procedural texture datablock into two baked texture datablocks, one for each object. More setup work for that use case but possible.

True, or using the same texture datablock in two separate materials. Either way it’s some setup but I can’t think of a way around that.

I’m guessing the use case you have in mind here is baking different variations of a texture for the same mesh? Because you’re saving them all in the same folder with only a different suffix, and presumably all the other bake settings like image resolution and channel packing would be identical. Or do you have a broader use case in mind, that might for example also involve different image resolutions for each instance, or more flexible differences in naming than a suffix?

Yes! But perhaps the settings like image resolution could be inputs on the bake output node, so they could be adjusted via the datablock inputs if desired.

I’m not keen on name pattern matching in nodes, because it doesn’t play well with instancing, library linking and dependency graphs in Blender.

That makes sense to me, it probably doesn’t matter to the user how they get matched!

There are also settings like cage extrusion and max ray distance, which you might want to tweak per mesh and so would need to be stored per mesh too.

True, though that’s already something we can’t tweak per mesh when baking all at once. I haven’t found it to be an issue, but a solution here would be nice. It sounds doable with nodes though if we can get which object is currently being baked.

How would your design handle the use case where you want to bake different materials and texture datablocks into a single image texture?

That is a really important use case I missed. I think it could be done inside the texture datablock with a node that outputs an object or collection of objects and then another node to get the material passes for the object that’s currently being evaluated. But if those materials include that same texture datablock it would create an infinite loop… is that enough of an edge case to just display a warning and a purple bake result?

Would you have to ensure the Bake Layer node has matching file paths, channel packing, margin, … in all the texture datablocks involved? And then Blender would detect matching file paths and bake things together based on that? Maybe you’d use a texture node group to keep things in sync, so you don’t get, e.g., different channel packing settings in each.

I was imagining that there could be inputs for all of those on the bake output node, but if nothing is connected they could use global settings. Seems like the best of both worlds? There could also be a global ‘bake texture datablocks’ button next to those global settings that could be used if auto-baking was slowing things down.

Edit:

For reference, the answer to that question in my design is: they are all together in one Image datablock, which is linked from a shader node in a Material datablock. This Image datablock may be linked by multiple Material datablocks for the purpose of baking multiple meshes and materials to one image.

Ok, I think this is the core of it. If the same datablock always bakes to the same image, I can see why there would be the above limitations. The difference in design is essentially multiple inputs → one datablock → one texture result vs. multiple inputs → one datablock → multiple texture results. In the first, the datablock would be in the materials that one wants to bake from and be an output, while in the second it would be in the materials that one wants to bake to and be an input. I think the second is more in line with how textures are currently thought of and used in Blender, but which is more useful/intuitive might only be known after some real testing.

Either way, thanks for considering the ideas and being so open to feedback! Looking forward to seeing how this progresses.

2 Likes

Let me tell you, this is the worst idea I’ve ever seen… if you want to make that, just create a Python add-on and add it to Blender like Node Wrangler… Basically, when I saw that you can edit the node tree when you press Tab… I see that as an add-on… No, your plan isn’t good… it needs a higher level… except for just a few nodes… No, please forget the project, really… it’s not what people want… it was never mentioned anywhere… you are not coherent with the basic demand.
But one thing is good: baking could have its own place in… texture nodes.

1 Like

If you bake this example to a single image, I imagine you would use multiple texture stacks. Maybe you have multiple texture stacks in one texture datablock, but it seems inconvenient to me.

But also, circular relations like object -> mesh -> material -> texture -> object are just problematic in general, it’s the type of data design that you want to avoid when possible. Not just in Blender, but as a general principle in software design.

This is another thing where an alarm goes off for me regarding software design principles: a texture datablock owning the data of its instances is also circular, but in a more subtle way.

3 Likes

To be clearer, I will list people’s requests below, because I think you have not really understood what the community is asking for…

  • Layer nodes
    Which would be able to add layers of any type, with the ability to take alphas into account.

  • More filters
    For example, a radial blur filter as well as a linear one, filters for inverting colors, and so on…

  • A baking node for textures
    One that allows a much simpler result rather than going through complex manipulations.

  • A node that replaces drivers, which are far too complex to use and not well known; I am thinking in particular of the Scene Time node that can be found in geometry nodes and could be brought into the shader editor.

  • The possibility of using these nodes both in the shader editor and in the texture editor.

  • A live preview of the textures in the shader editor

That’s the basic request! And now I don’t really understand what you’ve been thinking about: the impossibility of doing all that in the shader editor! The problem is that you don’t give us the possibility to use it in the shader editor… but that’s what people want. I’m sorry if I come across as cheeky or aggressive, that’s not my intent, but I defend these convictions and this request. We’ve wanted this for a very long time, and now that the subject is on the table, we have to talk about it, defend our ideas and weigh the good and the bad.

Despite the fact that a rewrite may be required for certain parts, this remains the best solution!!
All the mockups I proposed!



2 Likes

Hi @brecht thanks for answering.

I see your point. I’m trying to wrap my head around the question and imagine the scenarios.

  • Output baked textures - Let’s hypothesize a simple scenario: 2 low-res meshes with two materials writing to the same output node, which points to the same external texture path (the two meshes have non-overlapping UVs in the 1001 tile). So, basically, the output node in the two materials assigned to the two mesh objects has the same image datablock, which gets populated at bake time. It works just like the current (3.0) baking method when “clear image” is off and two materials share the same active texture to bake to. The difference is that the image in the datablock gets automatically written to file, just like the compositor output node. Another point (I guess, I’m no developer) is that the image needs to be stored internally in a datablock because it is completed in several bake “passes”, in this case 2 “passes”, one per mesh and its material.
    In the specific case of writing to file, the image datablock could be internal to the output node and trashed as soon as the final texture is written to file.
    To summarize, for this case the datablock is an image datablock, unique and shared across all the output node instances in all the materials; the crucial point is that it gets populated at different moments, by different “calls” from every material. It is the user’s responsibility to prevent data from being overwritten (by avoiding UV overlap between objects).

  • Cached baked textures - I suspect this case is different. I just see the cache node having a unique internal (not even necessarily exposed to the user) image datablock. Let’s say we have the same wood material on two meshes with different UVs, a table and a chair. We are not even baking for games; we just want a procedural wood material to be blurred. The two models are not supposed to be exported or packed, in fact they occupy the same UV space. In that case, the cached image for the “blur” node has to be unique per material instance, since the two meshes have different topology and UVs. So to answer, at least for this specific case, cached baked textures are stored in an image datablock that is always specialized per instance/material/mesh (see the sketch after this list).

  • Auxiliary baked textures - Let’s say we have 3 low-res objects, with non-overlapping UVs and two materials. I think in this case the answer is, like the first case, one image datablock, shared across the 2 materials, populated in 3 passes (one per mesh) when the auxiliary maps are generated.
    I’m thinking about possible benefits of linking the shared image datablock to the mesh instead of the material, since these maps are always topology and UV dependent (or not?).

So, to summarize: for output and auxiliary baked textures, just like your design; but for cache images, I think the datablocks should not be shared across materials/shaders.
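
For the cached-textures point above, the specialization I have in mind could be keyed roughly like this (again, just an illustrative sketch, not a proposed API):

```python
# Toy sketch: the cache is keyed per (mesh, material) pair, so the same wood
# material on a table and a chair gets two separate cached blur inputs, while
# two objects sharing the same mesh datablock could reuse one entry.
cache_store = {}


def get_cached_texture(mesh_name, material_name, bake_fn):
    """Return the cached raster for this mesh/material pair, baking it on demand."""
    key = (mesh_name, material_name)
    if key not in cache_store:
        cache_store[key] = bake_fn(mesh_name, material_name)
    return cache_store[key]
```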

I hope it makes sense and I’m not saying nonsense :slight_smile:

2 Likes

Ok, that’s closer to my design than I expected.

For cached textures, I don’t see the need to handle them differently. If you have two meshes that occupy the same UV space and you want to bake auxiliary and output textures for them, you will already have to ensure those are baked to two separate images. And so if that’s already set up, there is no need for additional specialization for cached textures?

Moreover, for the case where you are baking multiple meshes and materials to one output texture, it would be inefficient to use more than one cached texture, because if you have set up your UV maps so that each mesh only occupies part of a larger image, there would be a lot of wasted pixels if every mesh had its own cached texture.
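
As a rough back-of-the-envelope illustration (numbers invented for the example):

```python
# Four meshes share one 2048x2048 atlas, each occupying a quarter of the UV space.
shared_cache_pixels = 2048 * 2048        # one cached texture for the whole atlas
per_mesh_cache_pixels = 4 * 2048 * 2048  # four full-size caches, one per mesh
wasted = 1 - shared_cache_pixels / per_mesh_cache_pixels
print(f"{wasted:.0%} of the per-mesh cache pixels would go unused")  # 75%
```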

So for that reason I think a tight coupling between the auxiliary, cached and output textures makes sense. It’s also why I don’t have an “Output” node in the design, but rather have all this functionality in the “Texture Channels” node, which handles all 3.

1 Like

From what I’ve read in the linked proposal, it seems like there will be a new node space for textures, with new nodes, and ways to bake them etc.

Can you clarify why all of this wouldn’t just be integrated into the shader node editor?

It seems to me that having two different environments, with overlap, when there could just be one, would add complexity to the UX as well as potentially create fragmentation / duplicate work on the part of designers / developers.

To expand it even further, I don’t know why Blender doesn’t just have one node editor where you can connect everything to everything else… But that’s a bigger topic. In regard to this one, it would seem a much cleaner and more flexible system to have it all in one workspace, e.g. the shader editor, where you can already do node-based texture work…

2 Likes

There’s a potential problem in my design related to baking multiple materials to one image. Because I think you often want to not just do that, but also bake multiple materials into one material. That’s how it would be set up in a game engine, and also for rendering in Blender it would be a bit more efficient to have fewer materials (though not as much).

I don’t know how serious that problem is or how best to resolve it. If all you’re exporting from Blender is image textures, it doesn’t matter. If you export e.g. glTF or MaterialX and use the material binding from that in your game engine, you can end up with too many materials.

One solution could be to set up a single material in Blender, and then link multiple texture datablocks into it. There would need to be something that determines which texture gets used for which mesh or which part of the mesh. I guess this is what ID maps are often used for, but it would add another level of complexity to the design.

1 Like

But sometimes you don’t have any choice but to place a new system at a higher level… and that is why it needs to be implemented in the shader editor, because materials stay materials.

I’ve talked about that in various other replies in this topic. I know there’s a lot to read, but repeating myself will only make this topic longer. Here’s one:

2 Likes

Thank you for the explanation, that does make sense!

While Texture Nodes and Shader Nodes would have a lot of the same/similar nodes, there may be enough difference in how they work that it could cause a lot of confusion if they were put in the same graph.

I think the big difference is basically how placing details works. For shaders that render 3D scenes, such as Shader Nodes in Blender, for each pixel on the screen the shader is basically told some information, such as the position, normal, and UVs, and then asked: what color is this pixel? The shader doesn’t know about the other pixels. This means that you must decide where to place a detail before you generate it. You can’t take a detail from another pixel and move it here, and you can’t look at other pixels to blur them with this one.

When working with textures, you can input a texture with all its pixels already known, and easily adjust its position, rotation, and scale. You can create multiple copies of a texture, move them around to different spots, or use a random scatter node to place hundreds of copies, with overlapping details if you wish (such as a pile of fallen leaves), and you can look at the other pixels to blur them together.
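
To make the contrast concrete, here is a toy Python comparison of the two evaluation models (not real Blender code): a shader-style function only ever sees the one sample it is asked about, while a texture-style blur can read neighboring pixels because the whole raster already exists.

```python
import math


def shader_eval(position, normal, uv):
    """Shader-style evaluation: answers 'what color is this sample?'
    using only the information handed to it for that one sample."""
    # e.g. a simple procedural stripe pattern based on the UV coordinate
    return 0.5 + 0.5 * math.sin(uv[0] * 40.0)


def box_blur(pixels, width, height, radius=1):
    """Texture-style operation: every output pixel reads its neighbors,
    which is only possible because the whole raster is already known."""
    out = [0.0] * (width * height)
    for y in range(height):
        for x in range(width):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height:
                        total += pixels[ny * width + nx]
                        count += 1
            out[y * width + x] = total / count
    return out
```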

Because of this, many procedural patterns are a lot easier to do in tools like Substance Designer than in Blender’s shader nodes.

Another difference is that the shader can know specific details about the mesh, such as the position, UVs, normal, etc., which means that shader nodes can use a node to get them instantly, while you have to bake a texture for each of those if you want access to them in the texture nodes.

I think it could be quite confusing for new users to have 2 systems with mostly the same nodes in the same graph, but having some parts of the graph not be compatible with other parts.

Also, as brecht said, if it’s a separate system, the textures created in it could be used inside other systems. One possibility would be creating procedural alpha textures for texture/sculpt brushes, that can be adjusted on the fly, such as grunges, screws, flowers, simple shapes like polygons, or arrows, etc. You could also work on a texture and create a displacement texture on it, which could be used in a displacement modifier or geonodes to automatically add shape to your mesh, so that you can export the mesh with that displacement to a game engine. You could still bake a standard image texture for displacement, but it would be less interactive.

I think for most things, with a simple option to automatically generate a material with a Principled BSDF hooked up to the texture, you won’t even need to look at the shader nodes. And if you’re planning to render it outside of Blender, such as in a game, you may not need to care about the shader side of things except where a custom shader is needed, since that would have to be created in the other software anyway.

8 Likes

Thank you! This was the explanation I was looking for in terms of laying out the benefits to a separate system. That made total sense to me, given my limited experience with textures. Much appreciated. :vulcan_salute:

As Lichen said, this is really well laid out, and I agree. Having worked in SD and SP, I can say that Blender shaders are cumbersome for the game design workflow, which requires the procedural/texturing approach :vulcan_salute:

Hi @brecht, that sounds totally reasonable, but the reason I thought the cache bake datablocks should be treated differently is a simple case where the user doesn’t even want to bake the output.
Again, let’s just imagine a simple material to be used for rendering in Blender. It’s just a Principled BSDF with a raster, triplanar-mapped, tiled albedo, followed by a blur node. Nothing more. The material is supposed to be reused all over the scene, on multiple different objects. How will the blur node handle the caches? If the cache is unique at least per mesh (meaning that it’s unique per material instance, but Blender is clever enough to share it between identical meshes), the blur node can work pretty transparently, provided that the objects have UVs.

If the cache is always shared between material instances, the user would have to make a specialized copy of the material per type of mesh just to use a blur node. UX-wise that sounds limiting. Of course there are technical requirements for the blur (and any need-a-bake node) to work, like at least having proper UVs (even though one could think about generating auto UVs on the fly when not present, just like game engines do for lightmaps).

About separating the output from the texture:

Let’s first clarify the terminology I’m going to use:

Color channel: a single channel of a color output, like R, G, B or alpha.

Texture channel: a single component of a texture, like albedo, roughness, metalness, etc., which is itself made of color channels.

The benefit of separating the texture from the bake output node is packing texture channels into color channels of the output image.

E.g. using a separate texture channel node and then a Combine RGBA node to put the albedo RGB in the RGB color channels of the output image, and the metalness value in the alpha of the output image.
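
In raw pixel terms, the packing amounts to something like this (just an illustration of the idea, assuming NumPy arrays):

```python
import numpy as np

# Hypothetical baked channels, each height x width (values in 0..1).
height, width = 4, 4
albedo = np.random.rand(height, width, 3)   # RGB texture channel
metalness = np.random.rand(height, width)   # single-value texture channel

# Pack the albedo into RGB and the metalness into the alpha color channel.
packed = np.dstack([albedo, metalness])     # shape (height, width, 4)

# Unpacking for display or shading is the inverse split.
albedo_out, metalness_out = packed[..., :3], packed[..., 3]
```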

Or do you have other ideas to achieve this important feature while keeping the texture and bake output coupled?

Edit: I was thinking about managing arbitrary multiple-resolution outputs. I can see a benefit in decoupling the texture node from the output node there as well.

Provided that there is a scene bake settings datablock with a “global project resolution” setting, accessible from a hypothetical bake tab in the Properties editor, the user could choose whether to override the global resolution in the output node.

I can even see the benefit of decoupling the resolution override settings from the output node itself, using a “reformat” node. Other software does that, and it’s a very flexible and robust design for managing multiple output formats.

If no resolution is specified, the global project resolution is used, but the user can specify whether to override/replace the project resolution or, for example, double or halve it at the node level.
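
A tiny sketch of what I mean by resolving the effective resolution (all names hypothetical):

```python
def effective_resolution(project_res, mode="inherit", override=None, factor=1.0):
    """Resolve an output node's resolution against the scene-level project resolution.

    mode: 'inherit' uses the project resolution, 'override' replaces it,
    'scale' multiplies it (e.g. factor=2.0 to double, 0.5 to halve).
    """
    if mode == "override" and override is not None:
        return override
    if mode == "scale":
        return (int(project_res[0] * factor), int(project_res[1] * factor))
    return project_res


# Examples: inherit, replace, and halve the 4096x4096 project resolution.
print(effective_resolution((4096, 4096)))                            # (4096, 4096)
print(effective_resolution((4096, 4096), "override", (1024, 1024)))  # (1024, 1024)
print(effective_resolution((4096, 4096), "scale", factor=0.5))       # (2048, 2048)
```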

I don’t want to go too much off topic, but this concept of overriding scene settings could and should be extended to other areas of Blender IMHO, like render settings overridden per camera: resolution, frame range, samples, etc. This would be a huge benefit when one wants to render multiple takes of the same action with overlapping frames. It would also require a general “task editor” to schedule camera render priorities.

This may sound unrelated, and I totally understand that this idea is probably out of the scope of the proposal.
But while designing the texture export workflow, I invite you to keep this idea of generalized tasks in mind, because it can be applied to geometry nodes as well, once an “export mesh to file” node is implemented. If well designed, the task editor could be used to export model assets and baked textures, including LODs and multiple-resolution textures, in one click.

After all, if all the asset creation process is managed in blender, the user will likely want to export textures and models in bundle. And i think it could be a very powerful way to setup a pipeline with geonodes+texture nodes+ task editor (or even task nodes!)

1 Like

Such a fully automatic caching/baking system is definitely not planned as part of this project; it opens up too many additional problems. But also, if you have solved that problem, you’ve solved it for input and output textures too; it’s equally hard. So then it could be an auto-bake option for the whole thing.

What you’re also describing here is a partially baked, partially procedural texture. It may be interesting to consider that, but I think that’s orthogonal to the distinction between manual and auto baking.

In the design proposal, channel packing is intended to be configurable through settings on the Image datablock. If it was done through arbitrary nodes, Blender would not know how to do channel unpacking for image editor display and for rendering the material using those textures.

It is indeed. Let’s stay on topic here and not start discussing that, this topic is already long enough.

4 Likes

Thanks Brecht.

If channel packing is included in the image data block, that sounds good to me. (I missed that part apparently, sorry)

Sorry for the long off-topic; I’ve had these ideas in mind for ages, and seeing your proposal felt like a dream coming true! I didn’t want to miss the opportunity to give hints toward a broader design.

Thanks for all the hard work and I can’t wait to see more design updates!!!

1 Like