Layered Textures Design Feedback

Let’s first get the terminology clear here. When I refer to a “texture” in Blender, that’s a procedural, layered texture datablock, not a baked image texture.

My disagreement with @Kenzie is that I think such a texture datablock should not be hardwired to any particular mesh or baked image texture: it should not explicitly reference a specific object/geometry, and it should not itself contain the baked image texture it produces.

However, if I’m understanding correctly, your concern is different. You want this system to be able to produce a set of baked image textures that is shared across multiple meshes, which all fit together in a single UV map. Those baked image textures are hardwired to that set of meshes, at least in the sense that they are always baked using those meshes. If you’re creating a trim sheet, you might then later reuse those image textures as part of other texture datablocks on other meshes.

Figuring out how to handle multiple meshes like that is an important part of the design. If we assume the design from the blog post, the baked image textures are stored in the material datablock. If the same material is applied to all those low poly meshes, the baked image textures would naturally be shared between all of them, and any bake operation could detect that and bake all low poly meshes together. And of course they would all be rendered using that material in the Blender viewport, to preview what it will look like in the game engine.

If baked image textures are associated with meshes, Blender could still detect which image textures are linked by multiple meshes, though then you need some UI to visualize and edit those together easily.

However, there’s another complexity: what if you want to use different texture datablocks for different low poly meshes, and still bake them together into a single set of image textures? This too Blender could detect, by seeing which materials or meshes share baked image textures, and bake them together. Alternatively, a single material could have some way to reference different texture datablocks for different objects. Or perhaps you need an “Image Texture Set” datablock that can be linked from multiple materials or meshes, rather than the individual image textures being what binds things together.
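To make that last option a bit more concrete, here is a rough sketch of how such an “Image Texture Set” datablock could relate to materials and meshes. None of these types exist; the names are only for illustration:

```python
# Purely illustrative data model for the "Image Texture Set" idea above.
from dataclasses import dataclass, field

@dataclass
class ImageTextureSet:
    name: str
    resolution: int
    images: dict[str, str] = field(default_factory=dict)  # channel -> image name

@dataclass
class Material:
    name: str
    texture_set: ImageTextureSet        # multiple materials may link the same set

@dataclass
class Mesh:
    name: str
    materials: list[Material] = field(default_factory=list)

# Two low poly meshes with different materials would still bake together,
# because both materials link the same ImageTextureSet.
shared = ImageTextureSet("TrimSheet", 2048)
armor = Mesh("Armor", [Material("Metal", shared)])
cloth = Mesh("Cloth", [Material("Fabric", shared)])
```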

If you have specific ideas for how it should work, please make suggestions; that’s what this topic is for.

9 Likes

Yes, I think that kind of control should be added to baking settings.

7 Likes

I’m not arguing for it to be completely hardwired into the node graph. In my example the source of the geometry could have been brought in from outside meshes via the geometry socket through an object picker, or generated and parameterized internally as part of the procedural texturing setup. Both are valid use cases, and the baker exists as a tool to support them. I feel like adding more manual, fixed-meaning settings in the properties panel goes in the opposite direction of “everything nodes”. There are a decent number of parts of the baking process that could benefit from settings being exposed via fields rather than relying on cumbersome scene setups. If we are solely worried about reusability: a lot of geometry nodes graphs in the wild still have this problem, where they depend on outside scene objects that may hinder their usability without tweaking. The end goal of a graph isn’t always to be infinitely reusable.

I have re-read it and taken some time to process it. It looks really solid, and I think it would work. Here are some thoughts on the closing questions:


Is making a single texture datablock for many uses a good idea? Or does it make the workflow too fuzzy? Is there a better alternative? If not, how would filtering relevant texture assets work exactly?

Yes, and I’d leave it up to users. Even if we did not have modern tools like the Asset Browser, naming conventions could be used. Now that we do have sophisticated tools like the Asset Browser, users have a choice between naming conventions, tagging and categorizing. That seems sufficient.

It’s actually even better, since you can, for example, use the same texture datablock to displace a mesh in Geometry Nodes and to texture it in the Material Editor, with the same mapping coordinates. Right now, in order to do that, you need to manually keep two separate systems in sync.


Are there better ways to integrate textures into materials than through a node? Is it worth having a type of material that uses a texture datablock directly and has no nodes at all, and potentially even embeds the texture nodes so there is no separate datablock?

While simplicity is almost always desired, I can think of a rare scenario where someone has a Texture Nodes (TN) datablock and wants to ray switch (a light path based mix) the diffuse channel, for example, before it goes into the material. Since the Light Path node won’t be available inside TN, having materials that directly use texture datablocks would make TN essentially incompatible with nodes like Light Path.


With this system, the users can now do the same thing both in texture and shader nodes. How does a user decide to pick one or the other, is there anything that can be done to guide this?

I suspect that right now, most users actually want to do what the Texture Nodes are supposed to do. While they are mixing shaders, more often than not they intend to mix only the channels of the PBR shader, since that’s the workflow people expect from other apps like Substance Painter (SP) too. The reason they use shader mixing rather than channel mixing is that the current system doesn’t allow the latter natively, so it requires some ugly workarounds resulting in very messy node networks.

There should be a general rule of thumb: are you 100% sure you need to mix actual materials, and do you understand the difference between material mixing and material channel/attribute mixing?

  • Yes: Use Shader Editor
  • No: Use Texture Nodes Editor
  • Unsure: Use Texture Nodes Editor

I was thinking about the fact that it will be a rather awkward situation when both TN and the Shader Editor have large, overlapping sets of nodes. Initially, my idea was to reduce the number of nodes available in the Shader Editor, so people do those things more properly in TN, but then I realized that doing so would once again remove the workflows that rely on nodes which won’t be supported inside TN, such as Light Path.

So the best course of action here seems to be to have both the Shader Editor and TN able to do almost the same things, and to re-evaluate the situation after a few versions to see if people still need all of the nodes in the Shader Editor, or if there are some workflows that practically no one uses.


What is a good baking workflow in a scene with potentially many objects with many texture layers? In a dedicated texturing application there is a clear export step, but how can we make a good workflow in Blender where users must know to bake textures before moving on to the next object?

This is relatively clear. The ON/OFF state of the texture baking process should be stored on the object. The filename and output path of the baked textures should be stored in the material or TN datablock.

Then a global bake button in the render settings, when clicked, should iterate over all scene objects, check whether they have a material with bake output and are enabled (set to perform the bake), and bake them. Baking all of the scene objects should be a matter of a single click, and the user decides what should be baked and when by turning the bake state on and off per object.

The texture baking file output system should support expressions, for example "Assets/Textures/[objname]_[channel]_[resolution].png", so that the very same TN can be used on multiple objects and output unique textures for each:
Chassis_BaseColor_4096.png
Wheels_Normal_1024.png
(This is just an example; obviously it will have to be thought through more.)
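To illustrate, here is a rough mock-up of how such a one-click bake pass with token expansion could behave. The property names and functions are invented for the example, not an actual API:

```python
# Hypothetical sketch of the "bake everything" pass described above.
def expand_tokens(pattern: str, obj_name: str, channel: str, resolution: int) -> str:
    """Expand [objname]/[channel]/[resolution] tokens in an output pattern."""
    return (pattern
            .replace("[objname]", obj_name)
            .replace("[channel]", channel)
            .replace("[resolution]", str(resolution)))

def bake_scene(objects):
    for obj in objects:
        if not obj.get("bake_enabled", False):
            continue  # per-object ON/OFF state stored on the object
        mat = obj["material"]
        for channel in mat["bake_channels"]:             # e.g. "BaseColor", "Normal"
            path = expand_tokens(mat["output_pattern"],  # pattern stored on the material/TN
                                 obj["name"], channel, mat["resolution"])
            print(f"bake {obj['name']} / {channel} -> {path}")

# Example:
bake_scene([
    {"name": "Chassis", "bake_enabled": True,
     "material": {"bake_channels": ["BaseColor"], "resolution": 4096,
                  "output_pattern": "Assets/Textures/[objname]_[channel]_[resolution].png"}},
])
```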

Here’s a practical example. The scene has objects 1, 2, 3, 4, 5 and 6, and three materials with TN: A, B and C.

Object 1 has Material A, which has TN set up to bake a “Color.png” texture. No expressions are used in the filename. Object 1 occupies the left half of the 0-1 UV square.

Object 2 also has Material A with the same TN datablock. Object 2 occupies the right half of the 0-1 UV square.

Object 3 has Material B, which has TN set up to bake [objname]_Color.png. Object 3 UVs occupy the entire 0-1 UV square.

Object 4 also has Material B with the same TN datablock. Object 4 UVs occupy the entire 0-1 UV square.

Object 5 has Material C, which has TN set up to bake a “Displacement.png” texture. Object 5 UVs occupy the entire 0-1 UV square.

Object 6 also has Material C. Object 6 UVs occupy the entire 0-1 UV square. Object 6 has the baking node set up fully, but its baking state disabled.

Clicking the Bake button would produce the following:

  • A Color.png texture where Object 1’s baked color map occupies the left part of the texture and Object 2’s baked color map occupies the right part.
  • An Object_3_Color.png file with the baked color map of Object 3.
  • An Object_4_Color.png file with the baked color map of Object 4.
  • A Displacement.png file with the baked map of Object 5.
  • Object 6 was not baked and did not overwrite the bake of Object 5, because the bake toggle was disabled on it.

Some textures can remain procedural while others must be baked to work at all. How can we communicate this well to users? Is it a matter of having a Procedural / Baked switch on textures that can be manually controlled, or is there more to it?

I do not have a clear answer to this yet, but what I’d say is crucial is to distinguish two different classes of baking: input and output.

Input baking is generating texture maps such as convex and concave area masks (curvature and occlusion) to be used to drive procedural effects in materials.

Output baking is baking and collapsing the final visual result of the materials into a texture usable both outside and inside of Blender.

While both of these use cases could share features, it’s important to recognize that these are distinct processes for distinct purposes, and therefore require different workflows.

I am thinking about the possibility of having a “Cache” node usable only inside TN. The Cache node would have the following parameters:

  • RGB, Vector or Float input socket (selectable)
  • Resolution
  • UV Channel
  • Output path

Resolution would define the resolution of the cached texture.

UV Channel would define the UV channel to bake the input into.

Output Path would define optional storage of the texture outside the .blend file. If left unspecified, the texture would be stored within the file.

The cache node would work such that anything plugged into it would be cached and baked into the texture.
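A rough mock-up of that behaviour, just to make the parameters and the caching step concrete. This is not a real API; all names are invented:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional
import numpy as np

@dataclass
class CacheNode:
    socket_type: str                     # "RGB", "Vector" or "Float"
    resolution: int                      # resolution of the cached texture
    uv_channel: str                      # UV map the input is baked into
    output_path: Optional[str] = None    # None -> stored inside the .blend file
    _cached: Optional[np.ndarray] = field(default=None, init=False)

    def evaluate(self, upstream: Callable[[int], np.ndarray]) -> np.ndarray:
        # Anything plugged into the node is baked once into a texture and
        # read back, instead of being re-evaluated per sample.
        if self._cached is None:
            self._cached = upstream(self.resolution)
        return self._cached

# Example: cache a procedural float mask at 1024x1024 on "UVMap".
node = CacheNode("Float", 1024, "UVMap")
mask = node.evaluate(lambda res: np.random.rand(res, res).astype(np.float32))
```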

The drawback of this workflow would be that some nodes that absolutely require caching in some contexts, such as Ambient Occlusion or Blur, would simply not work out of the box, and would show a warning message that they require a Cache node.

The benefit would be that the points of caching are more explicit, so the user does not unknowingly cache many more texture states than they intended and quickly run out of GPU memory.


Can we improve on the name of the Texture datablock or the new planned nodes?

Since we have Geometry Nodes, and we are planning Particle Nodes, “Texture Nodes” is the obvious choice :slight_smile:

2 Likes

My concern with baking nodes inside a texture datablock is not so much reusability; often they will be specific to a certain mesh or set of meshes. Rather, think for example of the case where you bake multiple meshes, each with potentially a different texture datablock, into one image texture. How does this work when the baking is done inside the texture datablock?

Sounds like a good way to combine texture sets, a problem often faced when making game assets/LODs.

In my example I split it into three different nodes (not a design requirement, just my thinking): one for geometry bakes (we can safely ignore that in this example of multiple texture sets), one that just bakes the rendered Cycles passes like the current baker does, and one that evaluates an attribute field over the input geometry. In my example, if a user wanted to combine the results of all the materials on a mesh, the render baker would likely do the trick; if they wanted to apply a procedural pattern over all potential surface area, they would use the field baker.

(Note going to bed)

2 Likes

I don’t think you’d want to use a whole different baking mechanism depending on whether you combine multiple texture datablocks (or materials) into one image texture or not.

In my example I just split it up into different nodes like that; you could definitely combine all those features into an uber baker node. It’s just a side effect of the “render baker” functioning like the current baker. The attribute field baker in the example just functions like any attribute field in geometry nodes, which is why it doesn’t really depend on which texture set it is on: it just takes an expression and evaluates it at each pixel.

This seems designed for a different workflow than the one @Moniewski has in mind, where you work with multiple objects. Here you have a single object with potentially multiple materials that are baked together.

Perhaps a [collectionname] token could make it cover both use cases, though in that case it’s still not obvious on which datablock the bake settings are stored.

The design problem is not what the node is, but where the node and associated bake settings are located, to support the various different workflows.

I provided an example earlier of multiple separate objects being baked to a single texture set too. :slight_smile: The basic idea is that when multiple objects have the same material with the same TN datablock, which has the same output path and filename, they will all bake into the same destination texture. If the different objects have overlapping UVs, that will cause issues, but it already causes issues in the current state of Blender, as well as in other applications such as SP. So taking care of that will always be the user’s responsibility.

Baking multiple separate objects into the same texture is flexibility that comes at the cost of having to be more careful. When working in the scope of just a single object, it’s easier to prevent UVs from overlapping.
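A tiny illustration of that grouping rule, with placeholder names only: bakes are grouped by their resolved output destination, so objects sharing a material/TN (and thus path and filename) end up baked into the same texture.

```python
from collections import defaultdict

def group_bakes(objects):
    groups = defaultdict(list)
    for obj in objects:
        if obj["bake_enabled"]:
            # same destination -> same bake group -> same texture
            groups[(obj["output_path"], obj["filename"])].append(obj["name"])
    return groups

# Objects 1 and 2 from the earlier scenario both resolve to "Color.png",
# so they form one group and bake into a single image; Object 6 is skipped.
print(group_bakes([
    {"name": "Object 1", "bake_enabled": True,  "output_path": "Assets", "filename": "Color.png"},
    {"name": "Object 2", "bake_enabled": True,  "output_path": "Assets", "filename": "Color.png"},
    {"name": "Object 6", "bake_enabled": False, "output_path": "Assets", "filename": "Displacement.png"},
]))
```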

2 Likes

But what if you want those separate objects to have different materials and texture nodes?

You can manually ensure those have the same output names, which is a bit fragile but possible. But then, where do you put the shared settings for resolution, channel packing, margin, … ? Or do you just specify those multiple times as well, and hope it all matches up?

You mean you’d have two separate objects with two separate materials and two separate textures and then you’d want both of them to be baked into the same texture?

My understanding is that @Moniewski wants to be able to do that, yes.

Alright, let’s establish a practical example scenario, as it will be easier.

We have a model of a knight. The knight is wearing some clothes, some leather items such as boots and gloves, and metal armor on top of the clothes.

For me, the common workflow would be to keep the knight model as a single mesh with multiple material slots. But let’s say that for workflow reasons the user wants to keep the object separated by material, so that the metal armor is one object, the gloves and boots another, and the clothing the last one.

Now, we intend to bake all these objects into one PBR texture set.

My naive expectation would be the following:
I’d first select all the objects at once, enter Edit Mode, verify that their UVs are not overlapping each other, and re-pack them if they are.

I’d then set up the bake nodes for the individual channels, like BaseColor, Metallic, Roughness, Normal and so on. In this case the filename pattern would be, let’s say, Knight_[channel].png.

I would then wrap these in a node group.

I would put this node group into each of the materials: Metal, Leather and Cloth, and plug the appropriate channels into it. The logic here is simple: the baking settings are shared, so I am using the tool intended for synchronizing data between separate datablocks, the node group.

I’d then click bake, and I’d expect to get:
Knight_BaseColor.png
Knight_Metallic.png
Knight_Roughness.png
Knight_Normal.png

In the real world, I’d be more likely to set up an RGB 8-bit bake texture and pack the Metallic and Roughness together using a Combine RGB node.

Any changes to the resolution or output of these materials would be done inside the node group, and therefore propagated.

This leaves room for error: a user can set up two unsynchronized bake nodes on separate materials and separate objects with the same file output path and filename, but different resolutions and overlapping UVs, and wreck their baked data. But then again, the same errors are possible in other software, as well as in Blender in its current state. So I am not sure whether there should be some error prevention mechanism for this; if there should, I am not sure how it would distinguish between error and intention. It may somehow be possible though.

2 Likes

In my proposal the bake settings are all on the “Texture Channels” node, rather than on separate bake nodes. There are two reasons why separate bake nodes pose challenges:

  • We need to cache/bake input image textures for AO, curvature, etc. When all the bake settings are on the Texture Channels node, the settings for input and output image textures can be together. If the bake nodes for output textures are separate, it’s not obvious how input textures are handled.
  • To view the baked material in Blender, we need to be able to construct shader nodes that read all the (packed) channels correctly from the image textures. If these are fixed settings, we can easily generate the corresponding shading nodes. However, if it’s an arbitrary node graph with e.g. Combine RGB, we can’t do that, so users would have to manually set up the shader nodes.

As far as I can tell, a shared shader node group mechanism relies on bake nodes, so it’s not obvious how to make that work in this case.

For that reason I’m still thinking about some way to share bake settings and baked image textures on a Texture Channels node across materials and meshes. But it’s certainly not the only option.
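To illustrate that second point: if the channel packing is described by fixed settings, generating the preview shader nodes is straightforward with the existing bpy API. A minimal sketch, assuming a conventional packing with roughness in the green channel and metallic in the blue channel (this convention is just an example, not part of the proposal):

```python
import bpy

def build_preview_material(name, basecolor_img, packed_img):
    """Generate shader nodes that read a fixed, known channel packing:
    basecolor_img is plain RGB, packed_img has roughness in G and metallic in B."""
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    bsdf = nodes["Principled BSDF"]

    base = nodes.new("ShaderNodeTexImage")
    base.image = basecolor_img
    links.new(base.outputs["Color"], bsdf.inputs["Base Color"])

    packed = nodes.new("ShaderNodeTexImage")
    packed.image = packed_img
    packed_img.colorspace_settings.name = "Non-Color"  # data, not color
    sep = nodes.new("ShaderNodeSeparateColor")          # Blender 3.3+
    links.new(packed.outputs["Color"], sep.inputs["Color"])
    links.new(sep.outputs["Green"], bsdf.inputs["Roughness"])
    links.new(sep.outputs["Blue"], bsdf.inputs["Metallic"])
    return mat
```

With an arbitrary graph of e.g. Combine RGB nodes there is no such fixed mapping to generate from.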

I love the design with Layers and LayerStack, but perhaps it is possible to implement it without a new datablock, and to simplify baking.
I’ll make a radically different proposal to provide ideas.
I would start by renaming the Shader Editor to the Material/Textures Editor.
As the name indicates, this editor would have Texture outputs in addition to the Material output.
Depending on the active output, the node tree would be treated as a material or as a texture. Users should also be able to add tags to catalog it.


On the other hand, I think baking at any point in the node tree would be more flexible.
Although the attached image is very crappy, I think it shows the concept.
Baking a node is equivalent to rendering the tree up to and including the selected node, converting it to a texture, or to a list of textures if it is a Layer.
Baking to a material is equivalent to converting the shader to a texture; for this it is necessary to know the object and the camera from which the render is produced (currently it is possible to connect a texture to the Surface socket of the Material Output, where it acts as an emitter).
Bakes can be done on the selected object or on any other, and the point of view can be the active camera or any other; by default, the selected object and the active camera.
At any time you can remove the bake container so that the texture or material becomes procedural again.
Baking parameters, as well as the export path, appear in the sidebar when the specific Bake node is selected.
The bake node can be refreshed at any time, or silenced.
Bake refreshes should also be possible in a general way: per material/texture, per object, per collection or per scene.

I hope this contributes something.

4 Likes

One thing I’d like to bring up is that the material authoring process and texture export should ideally be abstracted from each other, as separate workflow stages. For example, say I’ve created a wood material using the default PBR shader and the textures it needs; once it is made, I can then export it as a different set of textures needed for my game engine. Of course, not everything can be converted, and some loss of quality might happen if the target texture set misses some of the textures, but you get the idea: you shouldn’t be authoring materials that work only for a single setup.

There are presets in other texturing software that help with doing similar export conversions. The bottom line is reducing the monkey work that artists would otherwise need to do.
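As an example of the kind of conversion such presets perform, here is a rough sketch of an approximate metallic/roughness to specular/glossiness remap applied to baked maps. The function and names are mine, not an existing preset; the 0.04 dielectric reflectance is the usual approximation:

```python
import numpy as np

def metal_rough_to_spec_gloss(base_color, metallic, roughness):
    """All inputs are float arrays in [0, 1]; base_color has a trailing RGB axis."""
    metallic = metallic[..., None]
    diffuse = base_color * (1.0 - metallic)                   # metals have no diffuse
    specular = 0.04 * (1.0 - metallic) + base_color * metallic
    glossiness = 1.0 - roughness                              # simple inversion
    return diffuse, specular, glossiness

# Example: a flat grey dielectric with 0.5 roughness.
base = np.full((4, 4, 3), 0.8)
diffuse, specular, gloss = metal_rough_to_spec_gloss(base, np.zeros((4, 4)), np.full((4, 4), 0.5))
```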

@brecht there are several workflows here.

One: a single object with a single output bake, a single UV and textures. Examples: a cup, a pencil or notebook, etc. The output is a single set of textures.

Two: a single object with multiple bakes, one for each material. Examples: a weapon (the stock and the metal being different materials), a character, furniture, etc. The output is multiple sets of textures, one for each material of the final object.

Three: several objects, each with a different material/UV, baked together into one map. This is typically done to avoid intersection errors in the bake. Examples: any of the above objects, but typically ones with elements at right angles are best baked individually and then combined. Note this is a bit of a pain and ideally would be avoided, since the asset has to be combined for final export anyhow, but it is necessary due to limitations of baking technology. The output is a single set of textures.

Finally: many objects with different UVs etc. to be baked together, essentially a batch mode.

At the moment I don’t see the initial design as being a limitation for this. How you accomplish it would come down to the layout of the UI for setting up the bake parameters. A bake would specify N input objects and N output textures, with an indication of which type of bake it is.
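To make that concrete, a loose sketch of what such a bake specification could contain (purely illustrative; none of these names exist in Blender):

```python
from dataclasses import dataclass
from enum import Enum, auto

class BakeKind(Enum):
    SINGLE_OBJECT = auto()       # one object, one texture set
    PER_MATERIAL = auto()        # one object, one texture set per material
    COMBINED = auto()            # several objects baked into one texture set
    BATCH = auto()               # many objects, baked independently in one go

@dataclass
class BakeJob:
    kind: BakeKind
    input_objects: list[str]     # object names to bake from
    output_textures: list[str]   # destination image names/paths

# Example: workflow three, several objects combined into one texture set.
job = BakeJob(BakeKind.COMBINED,
              ["Stock", "Barrel", "Scope"],
              ["Weapon_BaseColor.png", "Weapon_Normal.png"])
```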

What type of conversions are you referring to? Channel packing, metallic to specular, something else?