Layered Textures Design Feedback

Perhaps I misunderstood. I thought some rasterization processes like blur needed input nodes to be baked.

My main concern is that the user should be able to bake specific texture nodes for faster iteration (i.e. a shorter bake process). Just baking a few texture nodes would be faster than always having to bake the entire node tree with all channels.

The example below shows a sculpt that gets refined over time. The artist is working on sculpting and lookdev at the same time. At certain steps, the user will want to rebake the curvature/pointiness, since the surface has changed. In this use case, the output of the curvature/pointiness does not go into any texture node that has a rasterization process (like blur), and therefore only one texture node requires rebaking in order to see the textured result.

Perhaps what you are describing is applicable to the use case I tried to illustrate above. If so, this is just me misunderstanding parts of the proposal.

There is indeed a certain set of input textures that need to be cached to evaluate the nodes quickly (AO, curvature, …).

It is also important to cache at some intermediate points in the texture nodes, for example if you are painting a mask that blends two layers that are expensive to compute, it would be good to automatically cache those two texture layers for quick blending. Ideally we would have some automatic heuristics for that (just caching everything would have terrible memory usage). Perhaps some manual control is needed as well.
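
Just to make the idea concrete, here is a minimal sketch of the kind of caching heuristic described above. Everything in it (the `upstream()` traversal, the cost and size estimates) is a made-up assumption for illustration, not part of the actual design or Blender API:

```python
# Hypothetical sketch: pick upstream nodes worth caching while painting.
# "upstream()", the cost and size estimates are assumptions for the example.

def nodes_to_cache(painted_node, cost_ms, size_mb, budget_mb, threshold_ms=50.0):
    """Return upstream nodes worth caching, most expensive first.

    cost_ms: dict node -> estimated recompute cost in milliseconds
    size_mb: dict node -> size of the cached result in megabytes
    """
    # Only consider nodes that are expensive to recompute.
    candidates = [n for n in painted_node.upstream() if cost_ms[n] > threshold_ms]
    candidates.sort(key=lambda n: cost_ms[n], reverse=True)

    chosen, used_mb = [], 0.0
    for n in candidates:
        # Caching everything would have terrible memory usage,
        # so respect a fixed memory budget.
        if used_mb + size_mb[n] <= budget_mb:
            chosen.append(n)
            used_mb += size_mb[n]
    return chosen
```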

So there is indeed more going on than just a “Bake” button for everything, and I’m not sure yet how that will work. An “Update Geometry Cache” type button (to manually recompute AO and curvature maps) seems needed. Both those buttons I imagine to be at the Material level.

For painting you could imagine some automatic mechanism where, when you select an image texture or attribute to paint, it automatically caches whatever is needed to update interactively as you paint. But maybe that fails in some cases and more manual control is needed. In that case I would imagine e.g. a Cache node that you can insert, which Blender can still automatically figure out when to recompute, rather than e.g. manually rebaking a Blur node. Also, being able to evaluate textures at a low resolution for interactivity should be supported.

5 Likes

Just a side note on this one: I’ve had plenty of cases where I used the same material on several objects but only needed AO/curvature baked for some of them. It saved me a couple of minutes of baking time, which can be handy in a production environment with fast turnarounds.

Glad to see some serious movement in this department as well, can’t wait to do some proper texturing in Blender.

1 Like

I just want to get some clarification about the whole revamp of the texturing aspects of Blender in future versions.

@brecht What is going to be done about the UV tools? I’m all for fantastic new baking and painting workflows, but the UV tools are severely lacking in terms of artistic precision and control, fast and efficient algorithms, and ways to accurately remove distortion at the base level. I think it’s a big oversight for the developers not to notice this glaring elephant in the room, because it will affect the outcome of whatever work is done on the “Texturing” side of this revamp. Is the UV side being considered at all for a makeover, or a massive leap of improvement as well?

(Sorry to derail the topic of discussion, but this side of Blender seems quite cryptic when it comes to what discussions are being held and what professional artists can look forward to in the future.)

I wrote about something related in one of my posts above:

I think your issue is that you do not distinguish these two different types of baking, and hence you are trying to come up with one universal chimera solution which would work for both input and output baking. I don’t think that will ever work out in a way that doesn’t severely compromise on UX.

I think Blender should distinguish these two types of workflow and facilitate individual solutions for both.

Baking a curvature map to drive procedural wear-and-tear effects is not the same as baking a final PBR texture set for a game engine, for example. These two steps happen at different points along the workflow timeline and each needs a tailored solution.

1 Like

Especially since during production (very frequently in games) meshes and even UV maps get updated or changed, and the textures need to be updated over and over. In that case users need to be able to take care of the input textures separately and easily; rebaking them then triggers a recompute of the material graph, because its driving texture inputs changed.
I think Blender has the potential to become a real powerhouse in this regard, since most other software in the field is specialized in texturing and thus can’t track updates made to meshes as easily as Blender might.
If we find a way to incorporate a UV-independent, easy texture projection method somewhere down the line, then this can potentially be really huge. I’m excited. :smiley:

1 Like

@Doowah, there are many things related to texturing that need improvement, but we can’t do everything at the same time, and discussion about UV tools is off topic here.

3 Likes

There definitely is a distinction between input and output textures, we’re aware that those need to be handled differently in the design to some extent.

I still think a single Texture Channels node + single Image datablock together can hold all the information about where to store both types of textures, but there must be separate operators to update the cache of input textures and to bake output textures. Not sure if you would call that a different workflow, or if there really is a need to handle them entirely differently.

Yes, I know you are aware. My response was mainly to @DanielBystedt.

I saw that your proposal covers everything quite in depth, so I am quite sure you’ve already taken that into consideration, and you will ultimately make the right call.

The reason I posted was just to back that idea up. @DanielBystedt has a reputation as a power user and influencer when it comes to Blender, but I don’t think his suggestions here work out that well.

For that the usual solution is to support a lower-resolution Quick Bake. I can’t see any reason why you’d want a partial bake; it’s of no use to bake the Diffuse (color) channel but not the Normal channel.

  • Most or all of the time you want to bake at twice the resolution of your output textures. @brecht, note that this use case must be supported: the baking resolution being different from the output resolution. The reason is so you have plenty of working pixels for your manipulations.
  • Supporting arbitrary bake and output resolutions is fine, but the default easy case should be powers of two (512, 1024 and 2048 being most common); for technical reasons game engines need textures of these sizes. So a 2048x2048 output should have a 4096x4096 bake (see the sketch after this list). Larger-than-2048 outputs should be supported, but memory usage grows quadratically with resolution past 2048, so that’s less common.
  • Easy get/set of texel density should be possible in the UV editor (the UV editor hooking into the textures here).
  • As mentioned above, a ‘quick bake’ default should be possible. Blender baking is quite slow compared to other DCCs (Marmoset being best in class). So, for example, baking at 512 temporarily should be possible for quick checks before doing a final bake. Having the quick bake happen automatically should be possible (no need to initiate it manually).
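
For what it’s worth, the resolution rules above boil down to something like this minimal sketch (a hypothetical helper, not an existing Blender operator):

```python
# Hypothetical helper illustrating the rules above: snap the output size
# to a power of two, bake at twice that, and offer a low-res quick bake.

def bake_resolutions(output_size: int, quick_size: int = 512) -> tuple[int, int]:
    """Return (full_bake_size, quick_bake_size) for a square texture."""
    # Game engines generally want power-of-two textures (512, 1024, 2048, ...).
    pot = 1
    while pot < output_size:
        pot *= 2
    # Bake at twice the output resolution for plenty of working pixels;
    # memory grows quadratically with the texture side length.
    return pot * 2, min(quick_size, pot)

print(bake_resolutions(2048))  # -> (4096, 512)
```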

On the UV editor, Brecht is right that this discussion should stick to textures. However, UVs won’t go unnoticed because they go hand in hand with texturing. The Summer of Code student from last year is still doing some work here (there was a recent check-in adding edge selection, for example); perhaps that student could be given a grant over the summer to continue the UV work. That would synchronize nicely with the texturing work.

Hello everyone. As a texture artist, this discussion excites me the most.

I really like @DanielBystedt’s layer stack mockup, and I want to try making my own version of it.

So introducing the brand new Layer Stack Node!

Which can be explained as follows:
1). We have the image in the top part of the slot, which can be named.
2). The lower part of the slot is Alpha; as the name suggests, it contains the alpha channel. It also shows the blending mode, and in the same spot we can adjust the opacity.
3). When a new texture is connected, another slot is automatically added on top and the node expands upward. When both links (color and alpha) are disconnected, that slot automatically disappears and the node shrinks again.
4). We can adjust the position of a layer slot with the grab button on the right.
5). With the yellow dropdown menu we can choose the blending mode of the image.
6). The checkbox beside Alpha enables the “preserve alpha channel when drawing” functionality in Texture Paint Mode.

As for baking, my idea is based on the pipeline that my current studio uses. Usually I need diffuse, glossiness, specular, illumination and bump textures from Blender, and set these up in a shader in Maya later. You can do a “regular” bake like AO, diffuse etc., but for this example I only want to bake diffuse and bump. So this is what I imagine my bake workflow would be:

1). Right-click on the node(s) I want and Toggle Mark as Bake; the node then shows a pink border.

2). In the Bake Type of the Bake tab, choose the new option “Marked Shader Node”. (By the way, this gives the same result as using the “Emit” option in the current version of Blender, with some workarounds and the help of the Node Wrangler add-on; see the sketch after this list.)

3). Choose the resolution, folder and file format. Click Bake, and it creates two baked textures using the same names as the nodes.
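
For reference, here is a rough sketch of the existing “Emit” workaround mentioned in step 2, using the current bpy API (Cycles only; the node name “Bump Mix” and the image size are made up for the example):

```python
import bpy

# Route the socket we want to bake into an Emission shader,
# then bake with type='EMIT' into the active Image Texture node.
mat = bpy.context.object.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# Socket to bake, e.g. the Color output of a node named "Bump Mix"
# (a made-up name for this example).
source = nodes["Bump Mix"].outputs["Color"]

emission = nodes.new("ShaderNodeEmission")
output = next(n for n in nodes if n.type == 'OUTPUT_MATERIAL')
links.new(source, emission.inputs["Color"])
links.new(emission.outputs["Emission"], output.inputs["Surface"])

# Cycles bakes into the selected + active Image Texture node.
target = nodes.new("ShaderNodeTexImage")
target.image = bpy.data.images.new("bump_bake", 2048, 2048)
target.select = True
nodes.active = target

bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(type='EMIT')
```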

[image]

If Blender detects them, the same applies to UDIM files as well.

[image]

In addition to that, if the developers agree with this in the future, I think it is also important that Blender can bake the textures into a native Krita file. I believe this would also encourage people to use open source more, or at least to download it and play around with it. Also, for various reasons some studios mandate .psd files to exchange data between texture artists who use different applications (in my studio we have artists who use Substance Painter, 3D-Coat, ZBrush, or who work directly in Photoshop). By exporting to Krita we can at least provide the layered format easily and export it to .psd from Krita later. For me this is one of many ways to challenge the .psd hegemony in the texturing department :smiley:

The option Keep Layers, as the name suggests, keeps the layers in the Layer Stack node. And Convert Nodes as Filter Layer means that, beyond the Layer Stack, any node that alters the appearance of the texture will be converted into a Filter Layer in Krita… But thinking about it, I don’t think we need these two options, because why would I want to convert to .kra if I don’t bring the non-destructive workflow from Blender to Krita with me? What do you think? Should the two options be included automatically?

[image]

This is the bake-to-Krita-files result.

[image]

This is the result if Blender detects UDIM files.

This is the resulting layer structure if we open the RGB Curve.001.kra file. Each Image and Alpha slot in the Layer Stack node is merged into one layer in Krita. The RGB Curve.001 and Hue Saturation Value nodes are converted into Filter Layers, with the layer order following the flow in the Blender Shader Editor.

Also, a little side note about the MixRGB node that has been bugging me for a long time: don’t you think the order of the top and bottom layers should be reversed so that it makes more sense?

Let me know what you think. Cheers!

6 Likes

Would it make sense to have more meaningful input names for the mix node, like ‘Foreground’ and ‘Background’ instead of Color (2|1)?

As far as I know, PBR bundles are layered rather than plain colors.
Your layering node looks like a special case of a linear blend node with an arbitrary number of inputs. One can do the same by chaining Mix nodes, but a dedicated layering node would ensure that its purpose stays clear.

I am glad to see that texturing and texture synthesis became a focus.

4 Likes

For my layering node, I think it’s similar to shaders: we can chain as many BSDF shader nodes as we want, but it’s simpler with one Principled BSDF.

I think ‘Front’ and ‘Back’ are a lot shorter. But I’m just curious: why did the initial design of MixRGB put the ‘Front’ side at the bottom? To me it seems to defy gravity.

Things being listed from 0 to 1 and from top to bottom is a general concept in Blender (and other software). The case of layers is actually an exception because they’re conceptually… well, “layered” onto one another, so having them in that order corresponds to the mental image of a stack of tracing paper. It makes sense, but it would introduce quite an inconsistency in the way most nodes work. Maybe it’s worth reversing it for the specific case of layer blending, though?

1 Like

Thanks for the mockups. These don’t really solve the same problems as the proposed design though, like setting up bakes per material for easy re-baking, a high-level layer stack UI, and layering of multiple texture channels?

For transferring a texture node graph to a layer stack in a 2D image application, what’s the use case for that? I could for example imagine that for painting the textures that go into bake, it would be nice to switch back and forth between Krita and Blender. But for baked output it’s not so obvious.

5 Likes

In my work with game art, I think that depends on which tool you do the final export from. If you are mixing a bunch of procedural, externally baked and hand-authored files, you would want to quickly change e.g. AO intensity for a particular body part.
If Blender is your final station before the flattened TGA/PNG/EXR, you do the adjustment right there; you don’t use the exported layer stack. But if your (pre-existing) pipeline uses 2D software as the final step/format (some game engines/users even link .psd files directly in their game content), then that layer stack export/sync would be very nice and useful.
For complicated assets, IMHO you don’t want to use a 2D image program as in the latter example. A powerful procedural/mixed pipeline will be developed and maintained in Blender (or currently in other tools) and bakes out the final result. Exporting assets from your pipeline with the idea of still enabling quick tweaks in a layer file just begs for de-synchronized changes. But a more linear or non-standard pipeline could make great use of this, if a 3D texture artist just delivers layer files for final lineart work by 2D artists.
An add-on could maybe also export the layer representation from inside Blender, but of course it would be slow and could more easily break unintentionally if the Blender layer stack moves toward a design that no longer maps to any 2D image editor.

Why not take the best of both worlds, procedural node trees and layer stacks, into the Shader Editor?

A node tree is reusable and modular, but nodes can become hard to decipher if the tree gets too complex. A layer stack is more straightforward and quite easy to organize, but in some cases it’s difficult to build a procedural texture with it.

I propose utilizing the Krita image format as a new node in the shader editor. Why Krita? Several reasons:
1). Because it’s open source, obviously.
2). We can open and edit the images non-destructively, with everything preserved.
3). The possibility to work back and forth between Krita and Blender seamlessly.
4). Krita has more mature image authoring tools.
5). We can also export to the .psd format via Krita if your production demands it.
6). But the most important part is that it will encourage more people to use open source applications :smiley:

I can’t think of a better name than Krita Node, since it opens the specific .kra format. What we see below is basically a carbon copy of Krita’s Layers window with some adjustments. You can add various kinds of layers, but one thing that is particularly different is the addition of the Node Layer.

Here we see two kinds of layers for storing images: the Paint Layer, where we can draw directly or drag and drop an image from the file explorer into it, and the Node Layer (with the node icon beside it), which can be double-clicked (or opened with the Tab key) à la Group Node to see the node network. It acts just like a group node, reducing complexity and making things easy to organize. The Node Layer can also show its Group Input and Output parameters, so the user can tweak them without having to open the node. But I’m not sure where to put them; perhaps the layer is collapsible?

For the Node Layer, if it’s too complex for Krita to read, perhaps it could be muted or temporarily “baked” in the .kra file while still maintaining the node information. We could apply this treatment, except for the Filter Layer status of the Node Layer.

Another thing about the Node Layer is that it can change its status to Filter Layer; you can see some layers with an F (Filter) mark and an FM (Filter Mask) mark, just like the way Filter Layers work in Krita. The reason is that Blender has its own special nodes that work exactly for this purpose (I imagine such a Filter Layer would be limited to a single special/effect node). This means that Krita would be able to convert such a Blender node into its own Filter Layer and vice versa.

Another thing: the blending modes should be seamless and identical between Blender and Krita. I have tried several times to arrange images manually by replicating a Blender node flow in Krita layers to make a .psd file, and the blending modes give me different results and distorted colors. Even the parameters are quite different at the moment.
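
Not part of the proposal, but one common source of such mismatches is whether a blend mode is applied to the encoded (sRGB) values or to the linearized values. A tiny sketch using the standard sRGB transfer functions shows the difference for Multiply:

```python
# Multiply two mid-gray sRGB values: blending on encoded vs. linearized
# values gives visibly different results, which is one typical reason two
# applications disagree about "the same" blend mode.

def srgb_to_linear(c: float) -> float:
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

a, b = 0.5, 0.5  # two sRGB-encoded gray values

multiply_encoded = a * b                                # blend the encoded values
multiply_linear = linear_to_srgb(srgb_to_linear(a) *    # blend in linear light,
                                 srgb_to_linear(b))     # re-encode for display

print(round(multiply_encoded, 3), round(multiply_linear, 3))  # 0.25 vs ~0.237
```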

In the Image Editor, the same layering format will appear in the Properties tab if a Krita image file is opened.

Since the Node Layer is reusable, we can also drag and drop a specific Node Layer into the Asset Browser, or maybe right-click and choose Mark as Asset.

The same workflow also applies to the other channels. Here we see the Roughness.kra file, whose own layers are used as the Roughness color. I imagine it might be better if an individual Paint Layer could also become a node that can be reused (cloned or instanced) by dragging and dropping it into another Krita Node. So, for example, when we select Paint Layer 2 on Diffuse.kra, the clone of Paint Layer 2 is automatically selected on Roughness.kra as well. This is useful for a PBR texturing workflow.

Some of us might ask: what if the UVs change and we need to adjust the textures to the new UV layout? That’s a pretty common scenario in the texturing department. This is where the new UV Map node comes in handy… With a click of a button we can re-bake all of the Paint Layer images onto the new UVs.

[image: Layer Texture Proposal 08]

For the bake workflow I propose using a collection of Bake Types, each of which has its own render parameters. The reason is that in some productions the requirements for the texture maps are not uniform; for example, diffuse may be a 4K, 16-bit TIFF while specular is a 2K, 8-bit TIFF.

At the very bottom of the Bake tab, the user can add as many Bake Types as they want. It would also be nice if the user could make their own bake templates.
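
Purely as an illustration (this is not an existing Blender API, just one possible way to express it), per-channel bake settings like the ones above could look something like this:

```python
from dataclasses import dataclass

@dataclass
class BakeType:
    channel: str
    resolution: int   # square texture size in pixels
    bit_depth: int    # bits per channel
    file_format: str

# Non-uniform requirements, as in the example above.
bake_types = [
    BakeType("Diffuse",  4096, 16, "TIFF"),  # 4K, 16-bit
    BakeType("Specular", 2048,  8, "TIFF"),  # 2K, 8-bit
]

for bt in bake_types:
    print(f"{bt.channel}: {bt.resolution}px, {bt.bit_depth}-bit {bt.file_format}")
```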

Overall, if we do this, I think it is going to be the greatest open source universe merger of all time :smiley:

11 Likes

Instead of using the Krita image format in your proposal, I would suggest the OpenRaster format (which was proposed by people from Krita).

Building this around a layer stack designed to be compatible with 2D image editing apps is too limiting. For PBR texturing you really need layering of multiple channels at once, node graphs and many types of nodes that have no equivalent in 2D image editing apps.

14 Likes

@brecht Question: Have you personally by chance ever used “Substance Painter & Designer” or “Mari”?