Hi @brecht, that sounds totally reasonable, but the reason I thought the cache bake datablocks should be treated differently comes from a simple case where the user doesn't even want to bake the output.
Again, let's imagine a simple material meant for rendering in Blender: just a Principled BSDF with a raster, triplanar-mapped, tiled albedo, followed by a blur node. Nothing more. The material is supposed to be reused all over the scene, on multiple different objects. How will the blur node handle the caches? If the cache is unique at least per mesh (meaning it's unique per material instance, but Blender is clever enough to share it between identical meshes), the blur node can work pretty transparently, provided the objects have UVs.
If the cache is always shared between material instances, the user has to make a specialized copy of the material per type of mesh just to use a blur node. UX-wise that sounds limiting. Of course there are technical requirements for the blur (and any needs-a-bake node) to work, like at least having proper UVs (even though we could think about generating auto UVs on the fly when they are not present, just like game engines do for lightmaps).
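To make the comparison concrete, here is a minimal sketch in plain Python (hypothetical names, not the actual Blender API) of the two cache-keying strategies I'm contrasting:

```python
# Hypothetical sketch contrasting two cache-keying strategies for a
# "needs-a-bake" node such as blur. Not real Blender code.

def rasterize_blur(material, mesh):
    """Stub standing in for the expensive bake of the blurred result."""
    return f"blurred({material}, {mesh})"

_cache = {}

def cached_blur_shared(material, mesh):
    # Strategy A: one cache entry per material, shared by every object.
    # Cheap, but the cached result cannot depend on each mesh's UV layout.
    return _cache.setdefault(material, rasterize_blur(material, None))

def cached_blur_per_mesh(material, mesh):
    # Strategy B: one cache entry per (material, mesh) pair.
    # Identical meshes still share the entry, but each mesh type gets a
    # bake in its own UV space, so one material works on any object.
    return _cache.setdefault((material, mesh), rasterize_blur(material, mesh))
```

Strategy B is what I mean by "unique at least per mesh": the material stays reusable, and only the intermediate cache is duplicated where it has to be.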
About separating the output from the texture:
Let's first clarify the terminology I'm going to use:
Color channel: a single component of a color, like R, G, B or alpha.
Texture channel: a single map of a texture set, like albedo, roughness, metalness etc., which is itself made of color channels.
The benefit of separating the texture node from the bake output node is being able to pack texture channels into the color channels of the output image.
E.g. using a separate texture channel node and then a Combine RGBA to put the albedo RGB in the RGB color channels of the output image and the metalness value in the alpha of the output image.
Or do you have other ideas to achieve this important feature while keeping texture and bake output coupled?
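Just to illustrate what I mean by packing, here is a minimal numpy sketch (the arrays and sizes are made up, this is not node or Blender API code); the node-tree equivalent would be texture channel nodes feeding a Combine RGBA into the bake output:

```python
import numpy as np

# Hypothetical example: pack two texture channels into one RGBA output image.
H, W = 1024, 1024
albedo = np.random.rand(H, W, 3).astype(np.float32)     # (H, W, 3) texture channel
metalness = np.random.rand(H, W).astype(np.float32)     # (H, W) scalar texture channel

# Albedo RGB goes into the RGB color channels, metalness into the alpha channel.
packed = np.empty((H, W, 4), dtype=np.float32)
packed[..., :3] = albedo
packed[..., 3] = metalness
```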
Edit: I was thinking about managing arbitrary multiple-resolution outputs, and I can see a benefit of decoupling the texture node from the output node there as well.
Provided there is a scene bake settings datablock with a "global project resolution" setting, accessible from a hypothetical bake tab in the Properties editor, the user could choose whether to override the global resolution in the output node.
I can even see the benefit of decoupling the resolution override settings from the output node itself, using a "reformat" node. Other software does that, and it's a very flexible and robust design for managing multiple output formats.
If no resolution is specified, the global project resolution is used, but the user can choose whether to override/replace the project resolution or, for example, double or halve it at the node level.
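As a rough sketch of the resolution logic I have in mind (all names are hypothetical, this is just to show the fallback/override behaviour per output or reformat node):

```python
from enum import Enum

class ResolutionMode(Enum):
    PROJECT = "project"    # use the global project resolution as-is
    OVERRIDE = "override"  # replace it with an explicit value on the node
    DOUBLE = "double"      # relative override: 2x the project resolution
    HALVE = "halve"        # relative override: 0.5x the project resolution

def resolve_output_resolution(project_res, mode, explicit_res=None):
    """Pick the final bake resolution for one output / reformat node."""
    if mode is ResolutionMode.PROJECT:
        return project_res
    if mode is ResolutionMode.OVERRIDE:
        return explicit_res
    if mode is ResolutionMode.DOUBLE:
        return (project_res[0] * 2, project_res[1] * 2)
    if mode is ResolutionMode.HALVE:
        return (project_res[0] // 2, project_res[1] // 2)

# Example: global project resolution is 2048x2048, this node halves it.
print(resolve_output_resolution((2048, 2048), ResolutionMode.HALVE))  # (1024, 1024)
```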
I don't want to go too much off topic, but this concept of overriding scene settings could and should be extended to other areas of Blender IMHO, like render settings overridden per camera: resolution, frame range, samples etc. This would be a huge benefit when one wants to render multiple takes of the same action with overlapping frames. It would also require a general "task editor" to schedule camera render priorities.
This may sound unrelated, and I totally understand that this idea is probably out of the scope of the proposal.
But while designing the texture export workflow, I invite you to keep this idea of generalized tasks in mind, because it can be applied to geometry nodes as well once an "export mesh to file" node is implemented. If well designed, the task editor could be used to export model assets and baked textures, including LODs and multiple-resolution textures, in one click.
After all, if the whole asset creation process is managed in Blender, the user will likely want to export textures and models as a bundle. And I think it could be a very powerful way to set up a pipeline with geonodes + texture nodes + task editor (or even task nodes!).