Layered Textures Design Feedback

Hi @brecht, that sounds totally reasonable, but the reason I thought the cache bake datablocks should be treated differently comes from a simple case, where the user doesn’t even want to bake the output.
Again, let’s imagine a simple material to be used for rendering in Blender. It’s just a Principled BSDF with a raster, triplanar-mapped, tiled albedo, followed by a blur node. Nothing more. The material is supposed to be reused all over the scene, on multiple different objects. How will the blur node handle the caches? If the cache is unique at least per mesh (meaning that it’s unique per material instance, but Blender is clever enough to share it between identical meshes), the blur node can work pretty transparently, provided that the objects have UVs.

If the cache is always shared between material instances, the user would have to make a specialized copy of the material per type of mesh just to use a blur node. UX-wise that sounds limiting. Of course there are technical requirements for the blur (and any needs-a-bake node) to work, like at least having proper UVs (though one could think about generating auto UVs on the fly when they are missing, just like game engines do for lightmaps).
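To make the two options concrete, here is a minimal Python sketch of the two cache-keying strategies (all names are invented for illustration; none of this is Blender API):

```python
# Hypothetical sketch, not Blender code: two ways to key a bake cache.
bake_cache = {}

def bake_blur(material, mesh=None):
    # Placeholder for the real bake, which would rasterize the blurred
    # result into the mesh's UV space.
    return ("baked", material, mesh)

def cache_shared(material):
    # One cache entry per material: meshes with different UV layouts
    # would each need a specialized copy of the material.
    if material not in bake_cache:
        bake_cache[material] = bake_blur(material)
    return bake_cache[material]

def cache_per_mesh(material, mesh):
    # One entry per (material, mesh) pair: the material can be reused
    # everywhere, and objects sharing the same mesh also share the bake.
    key = (material, mesh)
    if key not in bake_cache:
        bake_cache[key] = bake_blur(material, mesh)
    return bake_cache[key]
```

With the second strategy the material datablock stays reusable everywhere, which is the UX benefit I’m arguing for.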

About separating the output from the texture:

Let’s first clarify the terminology I’m going to use:

Color channel: a channel of a single color output, like R, G, B, or alpha.

Texture channel: a single component of a texture, like albedo, roughness, metalness, etc., which is itself made of color channels.

The benefit of separating the texture from the bake output node is packing texture channels into the color channels of the output image.

E.g., using a separate texture channel node and then a Combine RGBA to put the albedo RGB into the RGB color channels of the output image and the metalness value into the alpha of the output image.
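As a purely illustrative NumPy sketch of that packing step (the arrays and names are mine, not anything from the proposal):

```python
import numpy as np

# Hypothetical example: pack texture channels into color channels.
# albedo is an (H, W, 3) array, metalness an (H, W) array.
H, W = 1024, 1024
albedo = np.random.rand(H, W, 3).astype(np.float32)
metalness = np.random.rand(H, W).astype(np.float32)

# Combine RGBA: albedo RGB -> color channels R, G, B;
# metalness value -> color channel A of the output image.
packed = np.empty((H, W, 4), dtype=np.float32)
packed[..., :3] = albedo
packed[..., 3] = metalness
```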

Or do you have other ideas to achieve this important feature while keeping texture and bake output coupled?

Edit: I was thinking about managing arbitrary multiple-resolution outputs, and I can see a texture node / output node decoupling benefit there as well.

Provided that there is a scene bake settings datablock with a “global project resolution” setting, accessible from a hypothetical Bake tab in the Properties editor, the user could choose whether to override the global resolution in the output node.

I can even see the benefit of decoupling the resolution override settings from the output node itself, using a “reformat” node. Other software does that, and it’s a very flexible and robust design for managing multiple output formats.

If no resolution is specified, the global project resolution is used, but the user can choose at node level whether to override/replace the project resolution or, for example, double or halve it.
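A tiny sketch of how such an override could resolve, assuming a hypothetical per-node mode setting (all names invented):

```python
# Hypothetical resolution-override logic, not actual Blender code.
PROJECT_RESOLUTION = (2048, 2048)  # the scene-level "global project resolution"

def resolve_resolution(mode="project", node_resolution=None):
    """Return the resolution an output/reformat node should bake at.

    mode: "project" falls back to the global project resolution,
          "replace" uses the node's explicit resolution,
          "double"/"half" scale the project resolution.
    """
    if mode == "replace" and node_resolution is not None:
        return node_resolution
    if mode == "double":
        return tuple(d * 2 for d in PROJECT_RESOLUTION)
    if mode == "half":
        return tuple(max(1, d // 2) for d in PROJECT_RESOLUTION)
    return PROJECT_RESOLUTION
```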

I don’t want to go too far off topic, but this concept of overriding scene settings could and should be extended to other areas of Blender IMHO, like render settings overridden per camera: resolution, frame range, samples, etc. This would be a huge benefit when one wants to render multiple takes of the same action with overlapping frames. It would also require a general “task editor” to schedule camera render priorities.

This may sound unrelated, and I totally understand that this idea is probably out of the scope of the proposal.
But while designing the texture export workflow, I invite you to keep this idea of generalized tasks in mind, because it can be applied to geometry nodes as well, once an “export mesh to file” node is implemented. If well designed, the task editor could be used to export model assets and baked textures, including LODs and multiple-resolution textures, in one click.

After all, if the whole asset creation process is managed in Blender, the user will likely want to export textures and models as a bundle. And I think it could be a very powerful way to set up a pipeline with geometry nodes + texture nodes + task editor (or even task nodes!).


Such a fully automatic caching/baking system is definitely not planned as part of this project; it opens up too many additional problems. But also, if you have solved that problem, you’ve solved it for input and output textures too; it’s equally hard. And so then it could be an auto-bake option for the whole thing.

What you’re also describing here is a partially baked, partially procedural texture. It may be interesting to consider that, but I think that’s orthogonal to the distinction between manual and auto baking.

In the design proposal, channel packing is intended to be configurable through settings on the Image datablock. If it were done through arbitrary nodes, Blender would not know how to do channel unpacking for image editor display and for rendering the material using those textures.
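As a rough illustration only (a hypothetical data layout, not the actual design): when the packing is declarative data on the Image datablock rather than arbitrary nodes, the mapping can be inverted mechanically for unpacking:

```python
# Hypothetical channel-packing description stored on an Image datablock.
# Because it is plain data (not arbitrary nodes), it can be applied on
# bake and inverted for image editor display or material rendering.
packing = {
    "R": ("Albedo", 0),     # (texture channel name, component index)
    "G": ("Albedo", 1),
    "B": ("Albedo", 2),
    "A": ("Metalness", 0),
}

def unpack(packing, color_channel):
    # Inverting the mapping is trivial because it is a lookup table.
    return packing[color_channel]

assert unpack(packing, "A") == ("Metalness", 0)
```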

It is indeed. Let’s stay on topic here and not start discussing that, this topic is already long enough.


Thanks Brecht.

If channel packing is included in the Image datablock, that sounds good to me. (Apparently I missed that part, sorry.)

Sorry for the long off-topic; I’ve had these ideas in mind for ages, and seeing your proposal felt like a dream coming true! I didn’t want to miss the opportunity to give hints toward a broader design.

Thanks for all the hard work and I can’t wait to see more design updates!!!


Assuming the datablocks of Geometry Nodes, Materials, and Textures were the same, I have tried to organize things so that editing them is accessible, comfortable, and visually unified from the Attribute Editor.
As can be seen in the first screenshot, in the current state it is almost impossible to understand even a simple material node tree in the Attribute Editor; moreover, it is not related to Geometry Nodes, and I see no relation to the proposed Texture datablock.

In the second image, the proposed Geometry Nodes, Constraints, Physics, Hair, Particles, Materials, and Textures would use the same datablock and the same nodes; the Output would mark the difference between one category and another.
Although a material is shown in the mockup, and surely there are a thousand gaps, I believe it can be extrapolated to any node tree, and in the case of textures it could show the layer stack as a starting point.
NODE EXPLORER, located below the list of components in the Attribute Editor, would show the nodes evaluated by the Output in a format similar to the Outliner’s, and would only show the attributes of the selected node. From the displayed node attributes it should also be possible to navigate to its inputs and outputs using arrows to their left and right, respectively. As an output can be connected to more than one input, clicking on the Output’s output socket should bring up a floating menu with all the nodes connected to it.

I hope this is not nonsense.

@brecht, with Unreal 5 and Nanite (virtualized geometry) we now have the possibility of using channel-packed baked maps in vertex colors.

It’s a little different from showing how a baked texture could be used in an end product, which is useful to know as we support different use cases.

Apologies if this has been mentioned and I missed it, but the point is that we need the ability to bake a channel-packed RGB texture into a vertex color, since Unreal’s vertex painting isn’t available for Nanite (it has to be pre-baked into the incoming asset).


There is also a free and open-source addon that assists with this process.

Please follow these guidelines; I edited the post to remove the embedded video.

I explained in the post in detail why I believed it did not violate the guidelines, but you can disagree of course.

Will viewport painting be improved?


Enough people liked this to motivate me to make another pass at it :grinning_face_with_smiling_eyes:

Here I made several changes to improve usability and ease of use:

  • (Left side) Here I made some adjustments to the “Local/Applied” buttons; it’s clearer now what each button does, thanks to a visual indicator.
    Another note: if some people find the buttons’ visual cue hard to see, there is also the idea of adding a dash/line under the Local/Applied text to make it clearer which button is active (as shown on the left, below the preview windows).

  • (Left side) Here I also mocked up a “what if you could change the preview display size” option that could sit in the N-panel together with the “Preview on/off” button.

  • (Right side) If anyone turns off “Preview” in the N-panel, every node will look like the “Layer node” shown in the picture.

  • (Right side) Here I applied the new Local/Applied buttons to the layer stack; the preview area has an additional “dark” overlay to differentiate it from the node inputs/outputs above it.


Following the same intention, I have made another mockup to show the same concept but with the texture Output; by adding a shader after the LayerStack and changing the output, it would also be possible to have a Material one.

I have added a bake to an attribute to reflect how these could be displayed from the node editor and the Attribute Editor. In the same way, baking the LayerStack could produce the texture channels that the BSDF materials require.

Your channels output needs an output like a shader or RGB color socket, because you are in a material editor…

If I wanted the node group to act as a material, I would connect the LayerStack directly to a SeparateChannels, then to the shader, and then to a Material Output. Since in this case the node group is meant to be a reusable texture, a Channels Output is the right connection; this way it should be reusable by various materials, modifiers, brushes, or in post-production.

My intention is to show how the unified nodes could be displayed in the same datablock, which would allow them to be edited both from the node editor and from the Attribute Editor.

Maybe it should be a LayerOutput instead of a ChannelsOutput?

I’m not sure I get the Applied/Local concept. Is it like before and after, where one displays the input texture and the other displays the output texture? If that’s the case, I’m not sure I like how it works.

These nodes could have multiple input or output textures, so I think what might make more sense is that hovering your mouse over one of the wire ports/sockets shows the texture at that port: hover over an input and it shows a before; hover over an output and it shows an after.

Additionally, you could click on a port to make the preview permanently show that particular port. However, I wouldn’t want some nodes showing before previews and others showing after previews, so I think you should only be able to click on an output port to permanently change the preview; inputs should be shown temporarily on hover only.
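A rough sketch of those rules in plain Python, just to pin the behavior down (every name here is invented):

```python
from dataclasses import dataclass

@dataclass
class Socket:
    texture: str      # stand-in for the cached texture at this port
    is_output: bool

def preview_for(default_output, hovered=None, pinned=None):
    """Pick which texture a node's preview thumbnail shows."""
    if hovered is not None:
        return hovered.texture        # temporary: before (input) or after (output)
    if pinned is not None and pinned.is_output:
        return pinned.texture         # only output ports can be pinned by clicking
    return default_output.texture     # default "after" view

# Hovering an input shows the upstream result; otherwise the node's output.
inp = Socket("upstream result", is_output=False)
out = Socket("node result", is_output=True)
assert preview_for(out, hovered=inp) == "upstream result"
assert preview_for(out) == "node result"
```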

I’m not sure, but if I understand your idea correctly, you just create a texture, and then with a new node, called for example “texture call”, we can reference this texture and connect it to a mapping or any material node?

Basically it would be put in the shader editor; the basic idea is to create a compiled image that can be output to any material node, if my logic is right…


You have something mixed up; the point of the Local/Applied idea is to show:

  • Local: how the modifier itself would look without being applied to anything
  • Applied: the modifier shown when it is applied to the input

There is no such thing as showing the input itself without any modifications, as that is shown in the previous node.
Example: you have a “Square” node that creates a square shape, and you connect that node to the blur node; in the blur node you then see how the “Square” looks with the blur applied, or, if you want, how the blur looks without being applied to anything.

Blur must be applied to something to generate a preview image. So what is it applied to, if not the input? Some default image?

Blur might not have been the best example :sweat_smile:

I would assume that a “modifier” node with a predefined preview look is quite rare, because, as you pointed out, it needs an input first to display a result, unless it is visualized in a different, more mathematical way (i.e. a default image, either generated from the settings or predefined).
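As a made-up illustration of that default-image idea (plain Python, nothing real): the “Applied” preview would run the node on its actual input, while the “Local” preview would run it on a neutral placeholder instead.

```python
# Hypothetical Local/Applied preview logic; the blur is a toy stand-in.
def blur(image, radius=2):
    # Naive box blur over a list-of-lists grayscale image.
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

# Neutral checker pattern used when previewing the node in "Local" mode.
DEFAULT_IMAGE = [[1.0 if (x // 4 + y // 4) % 2 else 0.0 for x in range(16)]
                 for y in range(16)]

def node_preview(upstream_image, mode="Applied"):
    source = upstream_image if mode == "Applied" else DEFAULT_IMAGE
    return blur(source)
```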

This suggestion came from @DrM so he might know more on this, or has more to say on it.

I’m not sure, but if I understand your idea correctly, you just create a texture, and then with a new node, called for example “texture call”, we can reference this texture and connect it to a mapping or any material node?

Yes, that’s my idea, and by the way, remove the shader editor and simply differentiate the type of node by the main output.
In addition to the Geometry output, also add Texture/Multitexture and Material outputs. Although a texture attribute can be obtained from Geometry Nodes, the nice thing would be that it could be cataloged and called from another object.

I’m seeing some problems in my idea, without even going into bakes yet:
1- To apply a textured material without doing everything in the same node tree, you would first create the texture node, call it from a material node, and call the material from Geometry Nodes.
2- Do I have to call a FaceSet to assign a material?
3- To visualize the material I need it assigned, and if it isn’t, Paint Mode should lend me one to preview it.
4- When making an asset of the LayerStack, it would also be convenient to visualize it on an object.
5- Can I call Geometry Nodes or a material from a texture node? Who is allowed to call whom?
6- What would it mean to apply the nodes of a texture or a material?
7- How does the order of the nodes affect the materials…?

I make proposals that are not even clear to myself, but I suppose it is time to reflect.

I’ll stop you now… geometry nodes and shader nodes can’t be merged; that creates big confusion in any large piece of software. Shaders and geometry were separated; forget this idea, it’s impossible, sorry, and it’s good that way! Both need to be kept distinct.

In my opinion this system is completely unnecessary… just creating a few new nodes for the shader editor would solve the whole problem; I don’t know why everybody is racking their brains so hard. These nodes have been wanted for such a long time. Just create them with an output and add them to the three node editors, and bam, the problem is solved… but the datablock needs serious investigation, and possibly a small rewrite…

  1. This is why the Material Output is necessary: you can’t apply a material without an output.

  2. Basically, yes.

  3. Yes and no… a viewer is better, like in any game engine: a preview mode or a direct output on the mesh.

  4. That would need a cache system?

  5. No, because Blender keeps both systems separated, and the only geometry you can edit is via displacement maps! I would request a new node, like a morphing node, in the shader editor.

  6. Texture nodes can design ONLY a texture! You can’t apply them directly to a mesh; you have to go through the shader editor and apply a material that uses the texture created in the texture node editor. (But basically nobody uses the texture editor; everybody creates their own textures in the shader editor, which is better, with more options and direct results. That’s why the texture editor needs to go.)

  7. In logical order, left to right.