Hi all, that was a long thread to read, I hope I understood the main topic correctly.
My feedback so far is that I like the initial design very much, and it's actually almost exactly how I imagined a layered texture baking system years ago when I started using Blender.
I agree with @LudvikKoutny and @jonlampel; my feedback would be more or less the same.
I’m trying hard to quote people who have already expressed their opinions, to avoid repeating concepts already explained, but I have 7 Chrome tabs open on this same thread as bookmarks and still can’t keep track of who said what. So I’ll try to explain some ideas I had in mind about the baking workflow.
First of all I’d like to conceptually separate (at least workflow-wise) different types of baked textures:
1) output baked textures: the final baked textures that can be used outside Blender or reused in Blender (e.g. the texture set of a character to be sent to a game engine).
2) cached baked textures: any intermediate result at any point of the texture node graph that needs to be baked before it can be processed further (e.g. the input to a blur node). This can be seen as a generic cache or checkpoint node, as mentioned in a geo-nodes related blog post.
3) auxiliary baked textures: any baked texture that is used to guide procedurally generated effects (e.g. AO, cavity, position, normal, etc.)
How are the output baked textures produced?
I imagine a node separate from the texture node, just like the compositor’s output node. The output node takes care of all the baking settings: resolution, format, path, names, etc.
About the output domain (Texture, Attribute, Volume): the first thing I can think of is that it could be a setting of the output node, e.g. write to file, write to mesh attribute, write to packed datablock, write to volume, write to volume VDB file, etc.
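To make the idea concrete, here is a minimal Python sketch of what the output node’s settings could look like. Everything here (the enum, the dataclass, every field name) is hypothetical, invented for illustration; none of it is an existing Blender API:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical output domains, mirroring the options listed above.
class OutputDomain(Enum):
    IMAGE_FILE = auto()       # write to an image file on disk
    MESH_ATTRIBUTE = auto()   # write to a mesh attribute
    PACKED_IMAGE = auto()     # write to a packed image datablock
    VOLUME_GRID = auto()      # write to a volume grid
    VOLUME_VDB_FILE = auto()  # write to a .vdb file on disk

@dataclass
class BakeOutputSettings:
    """Settings the output node would own, per this proposal (all illustrative)."""
    domain: OutputDomain = OutputDomain.IMAGE_FILE
    resolution: tuple = (2048, 2048)
    file_format: str = "PNG"
    path: str = "//textures/"
    name: str = "bake"

# Example: an output node configured to write to a mesh attribute instead.
settings = BakeOutputSettings(domain=OutputDomain.MESH_ATTRIBUTE, name="ao_mask")
```

The point of the sketch is just that the output domain becomes one more field on the output node, next to resolution, format and path, rather than a separate system.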
Where do these output baked textures live?
By default, in the output folder specified by the user, outside the blend file. But nothing prevents users from packing them like any other texture.
How are the cached baked textures produced?
I can see two scenarios:
a) A standalone “cache” or “checkpoint” node that can serve two purposes: avoiding recomputation of heavy calculations upstream of it when nothing has changed, and being mandatory before any node that needs a baked raster texture, such as “blur”.
Any “cache” node should have an “auto-update” checkbox to update it at every change upstream in the graph (when possible without compromising general responsiveness), and an “update” button to update it manually. In any case, there should be a centralized panel, bar or workspace with an “update all” button to update (say, bake) all the “cache” nodes in the scene at once. I’ll talk about that in detail later.
b) The “blur” node and similar “need-a-bake” nodes include the caching function themselves, so the user doesn’t need to manually put a “cache” node before them. The “need-a-bake” nodes have the same “auto-update” checkbox and manual “update” button, and they get updated as well when the global “update all” button is pressed. Of course, this scenario doesn’t exclude the existence of the general “cache” node discussed before, to be used to improve performance.
In any case, if a “cache” is not up to date due to changes upstream, its node should show a big warning badge (just like geo-nodes with missing data), and the warning should also be visible from a centralized area (again, I will explain it later). Plus, if the user tries to bake output textures while any related “cache” node is not up to date, they should be warned as well (pop-up and/or info area).
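As a rough illustration of the invalidation idea, here is a minimal Python sketch of how an upstream change could flag downstream “cache” nodes as outdated. The node class, graph structure and function names are all hypothetical, not an actual Blender design:

```python
# Hypothetical node-graph sketch: a change to any node marks every
# "cache" node downstream of it as dirty (i.e. showing a warning badge).
class Node:
    def __init__(self, name, inputs=(), is_cache=False):
        self.name = name
        self.inputs = list(inputs)
        self.is_cache = is_cache
        self.dirty = False  # cache nodes only: needs a re-bake

def mark_downstream_dirty(changed, all_nodes):
    """When `changed` is edited, flag every cache node downstream of it."""
    for node in all_nodes:
        if changed in node.inputs:
            if node.is_cache:
                node.dirty = True
            mark_downstream_dirty(node, all_nodes)

def outdated_caches(all_nodes):
    """What the centralized panel would list before allowing an output bake."""
    return [n.name for n in all_nodes if n.is_cache and n.dirty]

# Example graph: noise -> cache -> blur
noise = Node("noise")
cache = Node("cache", inputs=[noise], is_cache=True)
blur = Node("blur", inputs=[cache])
nodes = [noise, cache, blur]

# Editing the noise node invalidates the cache that depends on it.
mark_downstream_dirty(noise, nodes)
```

Baking the output would then be gated on `outdated_caches(nodes)` being empty, otherwise the user gets the pop-up/info-area warning described above.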
Where do the cached baked textures live?
To me it makes more sense if these textures are packed in the file, to avoid cluttering the filesystem at the expense of file size. But nothing prevents adding a setting to specify an external location, if the user wants to store them outside the blend file.
How are the auxiliary baked textures produced?
As far as I can tell, these are strictly related to the mesh, or to a material that is specific to that mesh (or to a group of meshes that share the material and have UVs packed in the same space, not overlapping).
I can see the same output node as for the “output baked textures” being used to bake this kind of texture, but maybe they could be packed into the blend file by default, with a collection input to determine the high-resolution mesh, using suffixes as already suggested.
Where do the auxiliary baked textures live?
As already said, packed in the file by default. Specifically, as a texture datablock in the material of the “low” model, or as a new datablock tied to the mesh data (like UVs, shape keys, etc.).
In the case of the node living in the material, the user can explicitly plug the node output into the procedural filters that require it.
If a datablock tied to the mesh is used instead, an input node can give access to it.
Centralized area for baking
A very important concept that I’d like to see is a centralized area giving an overview of all the bake nodes in the scene.
To me, a new dedicated bake tab in the properties panel (as already suggested before) could be nice. I vote for a cooking pot icon.
I don’t have a clear idea yet of how the panel should look, but I imagine it having at least:
- a button to update all the “need-a-bake” nodes at once
- a list of all the output nodes present in the scene, with buttons to mark them “enabled” or “disabled” for baking, like render layers
- a button to bake all the “output bake nodes” marked “enabled” at once
- a global resolution setting stored at the scene level, plus a mechanism to override it per output node when needed, since, as I stated before, I imagine every output node having control over its resolution. In that regard, maybe every node could choose whether to use the scene resolution or its own.
- an area giving access to all the output paths and output file names in one place
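To illustrate the scene-level resolution with a per-node override, and the “bake all enabled” behavior, here is a small hypothetical Python sketch. None of these names exist in Blender; they are just a way to pin down the idea:

```python
# Hypothetical: a scene-level default resolution that each output node
# can either inherit or override, as proposed for the centralized panel.
SCENE_BAKE_RESOLUTION = (2048, 2048)

class OutputNode:
    def __init__(self, name, use_scene_resolution=True, own_resolution=None):
        self.name = name
        self.enabled = True  # like render layers: included in "bake all"
        self.use_scene_resolution = use_scene_resolution
        self.own_resolution = own_resolution

    def effective_resolution(self, scene_resolution=SCENE_BAKE_RESOLUTION):
        """Scene setting by default, overridden per node when requested."""
        if self.use_scene_resolution or self.own_resolution is None:
            return scene_resolution
        return self.own_resolution

def bake_all_enabled(nodes):
    """What the panel's 'bake all' button would iterate over."""
    return [(n.name, n.effective_resolution()) for n in nodes if n.enabled]

# Example: one node inherits the scene resolution, one overrides it.
nodes = [
    OutputNode("albedo"),
    OutputNode("detail_mask", use_scene_resolution=False,
               own_resolution=(4096, 4096)),
]
```

The nice property of this scheme is that changing the scene resolution re-targets every inheriting node at once, while nodes that genuinely need a different size (e.g. a small mask) keep their explicit override.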
Let me know what you think. I’m pretty sure I’m overlooking a lot of things, but I hope this post will be useful as inspiration for more insights.