GSOC: Texture Node Editor: Discussion and Suggestions

Hello, I am Jishan Singh. I am preparing a proposal for redesigning the Texture Node Editor, as it has been deprecated for quite a while.
Here is the design I have in mind:
We combine the shader and texture editors, so that every color result can be rendered into a texture.
For example, a Texture Output node can be used in the material tab, and the node tree will be copied.


The texture output can be used in the material tab, but there will be a separate tab for textures there.

Edit: There will be two types of textures:
OutTex → image output from the node tree, used in modifiers and brush textures
ImgTex → image texture used as a canvas in texture paint, and as input in material and texture nodes
Also, OutTex will be fixed in size.
Edit: just like ImgTex, the OutTex size will be defined by the user.
[screenshot of the proposed node setup]
I am unsure about the case where we keep only the Image Texture (as discussed here).
Question 1:
The output from the nodes could overwrite texture paint. We could make some kind of lock for node or paint use, but would that be practical?
Or we keep two types of textures, so the output from nodes does not write to ImgTex (more practical).

Edit: I have abandoned this project. The limitations faced were:

  1. If the size of the texture is fixed, it will result in tiling (for procedural textures too), while the current system does not have this issue.
  2. Converting a 2D texture to a 3D one, like the procedural textures.
13 Likes

Maybe the interesting thing would be to combine the texture editor, shader editor, and baking into the same element. On the other hand, keeping them separate may be more useful, because the texture editor could be very useful in the future for use with geometry nodes. It also seems odd that editing a texture would have to depend on the shader editor.

1 Like

I’m just a user so take this for what it is but…

Node-based rasterization systems and shader systems are, as far as I know, fundamentally different. Shaders are evaluated starting from the "Material Output" node and pulling back through the tree. This is why a chain like "Image Texture → Blur Node → Principled BSDF → Material Output" is very hard to do: reconstructing the image from a full blur is impossible. This comes up every once in a while when new people who have used Substance Designer, or come from some other background, start using Blender and ask why there is no blur node.
Texture nodes ARE what would fix this, though. Having a separate Texture/2D Raster/Image node system that is evaluated from the inputs to the output would allow all kinds of destructive operations.
In the grand scheme of things, these 'Texture node' graphs would then generate a texture that could be fed into the shader graph as an "Image Texture", into modifiers (such as Displace), into dynamic paint use cases, and of course, in the future, into Geometry Nodes. A rough sketch of that evaluation order is below.
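As a purely illustrative sketch (plain Python/NumPy, not Blender API, with made-up node functions), forward evaluation means every node produces a complete raster before the next node runs, which is exactly why destructive operations like blur become straightforward:

```python
import numpy as np

def noise_node(size):
    # Generate a full 2D raster up front.
    rng = np.random.default_rng(0)
    return rng.random((size, size))

def blur_node(image, radius=2):
    # A destructive operation is easy: the whole raster already exists
    # by the time this node is evaluated.
    out = np.copy(image)
    for _ in range(radius):
        out = (out
               + np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
               + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 5.0
    return out

def image_output_node(image):
    # In Blender this would be written to an image datablock.
    return image

result = image_output_node(blur_node(noise_node(512)))
```

A shader graph, by contrast, is sampled per shading point from the output backwards, so a node never sees the whole image at once.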

Note about image texture

Or, instead of an "Image Texture" node, a new node: if it is possible to keep the two systems synced, it would be ideal to be able to expose parameters from the texture node graph into the shader node graph, and then have those exposed in the final material parameters.

That's a pretty massive project, though, and given how interlinked a system like this would be, I feel it probably wouldn't make a great GSOC project. Then again, maybe just refactoring the texture editor to have a base set of nodes and spit out image textures could work.

As for your proposal: I'm averse to the idea of just merging the two systems, and your language is pretty vague, so it is hard to make very concrete comments.

Don’t like the idea.

This is a bit oddly worded, but I assume you mean that graphs can be rendered into textures via the 'Image Output' node, which would align with the purpose of texture nodes.

This is very vague. Does 'material tab' refer to "Properties Editor → Material Properties", or to the Shader Editor, with the Image Output result appearing in the shader graph?
Does 'different tab' mean textures would have their own sub-menu under "Properties Editor → Material Properties", or that a new tab called "Texture Properties" would be added to the "Properties Editor", presumably under "Material Properties"?

Would this be defined on the "Image Output" node? One of the benefits of node trees is that they are easy to re-evaluate to change things. While working on your nodes it might be preferable to use a low resolution for fast interaction and then switch to a high resolution for shipping the final result. Ideally you would have a texture size setting per Node, per Node Graph, and per Image Output instance.
- Having individual node size control is both an art-direction and an optimization feature. Art-wise, having control over an individual node's resolution allows for pixel-art effects. Optimization-wise, if your end goal for the texture is 2K but the end result looks the same even when your noises and some other nodes are evaluated at 512… well, that's all the better.
- If there were no control for the whole graph, people would complain about having to set the resolution for each node separately. Design-wise, you'd probably want nodes to inherit the graph resolution by default and then allow individual nodes to override this inheritance if the user so chooses (see the sketch after this list).
- Having control over each instance of the Image Output/Texture would be beneficial for reusability. If I have a moss texture that I'm using both in my focal-point asset's material and on a tree 10 meters further back in the background, being able to use different resolutions is obviously beneficial.
That inflates this GSOC project's size again, though, so maybe the minimum viable product would be being able to set the resolution per Image Output/Node Graph.
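A tiny sketch of that inheritance idea (hypothetical names, not an existing Blender API): a node uses the graph's resolution unless it has its own override.

```python
GRAPH_RESOLUTION = 2048  # default for the whole node graph

def effective_resolution(node_override=None, graph_resolution=GRAPH_RESOLUTION):
    # Nodes inherit the graph resolution by default; an explicit
    # per-node override wins if the user sets one.
    return node_override if node_override is not None else graph_resolution

print(effective_resolution())      # 2048, inherited from the graph
print(effective_resolution(512))   # 512, a cheap intermediate noise node
```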

Note about individual nodes

Note that depending on the end goal of the system, this is more or less important. In some other vendors' products, one of the core features is having the node trees available in other end products, like a render engine or game engine. Parameters can then be edited there and the node graph re-evaluated; parameters could even be animated over time. This kind of use encourages having as much optimization as possible. Blender is unlikely to make any node trees accessible outside of the package any time soon. However, Blender has plenty of internal use cases, which in my opinion encourages investing in efficiency (one example would be using the end result as a brush alpha for paint/sculpt).

This seems to be mainly about preview functionality missing in Blender's shader nodes. I'm not sure how that is super relevant to texture nodes themselves. (For image-based texture nodes, previews are pretty simple, since you just look at the generated image? But then I don't know exactly how you plan to implement your nodes.)

Again, I'm having a pretty hard time deciphering this. Strictly speaking, I'm not sure why you'd want to overwrite a hand-painted texture if there are no blending options; you might as well plug the generated texture directly into whatever the hand-painted one was plugged into. If you could, for example, blend a hand-made noise or dirt texture over the hand-painted texture with multiply or such, I'd see some use in that.
In general terms, I can see three very big use cases for texture-node-generated images in painting mode. However, all of these more or less require further development in painting too.
- Base: using a generated brick wall (wood planks, beach sand, rock surface…) as the base of your texture painting would let you get started very quickly but still refine it to get that hand-made feeling with painted details.
- Repeating element: David Revoy on the Krita side recently made this video, and it would be more or less useful in Blender too. It doesn't have to be something in perspective; it could be procedural skin pores/freckles, for example.
- Tool: as mentioned before, generating alphas for brushes, stencils/stamps, and decal elements.

Implementation-wise, since Blender doesn't really have a layer system for texture painting, 'Repeating elements' would probably be the hardest to implement, since it really requires layers, blending modes, and masking. Currently there are add-ons and workflows where you can fake layers by having multiple texture paint files, merging them in shader nodes, and so on, but it is quite a cumbersome workflow (a rough scripted sketch of that workaround follows below).
Having texture nodes work as tools and as a base image could be feasible.
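For context, the current "fake layers" workaround mentioned above looks roughly like this when scripted; this is only a sketch, and the image names ("BaseLayer", "DetailLayer") and material name are hypothetical:

```python
import bpy

# Two separately painted image textures blended in the shader node tree,
# standing in for the layer system Blender's texture painting lacks.
mat = bpy.data.materials.new("PaintedLayers")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

base = nodes.new("ShaderNodeTexImage")
base.image = bpy.data.images.get("BaseLayer")      # hypothetical painted image
detail = nodes.new("ShaderNodeTexImage")
detail.image = bpy.data.images.get("DetailLayer")  # hypothetical painted image

# Blend the "layers" with a multiply, then feed the result into the shader.
mix = nodes.new("ShaderNodeMixRGB")
mix.blend_type = 'MULTIPLY'
links.new(base.outputs["Color"], mix.inputs["Color1"])
links.new(detail.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])
```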

As a last note, you don't mention who you think would mentor this project. You should probably figure that out as soon as possible. It is a big project regardless of which route you take, so there should be design guidance from someone who understands Blender internals from the get-go, I would imagine. Also, were it to become relevant to have some feature parity with the original texture node system, that might affect who the mentor is (for them to have some knowledge of the original system). At this point feature parity probably isn't going to be an issue, since the texture node editor has been gone for so long. But from the point of view of the system supporting sending textures to modifiers, brushes, and so on, this would probably be a consideration if this project were chosen to replace the old system. For example, even right now in 2.92, if you make a texture and enable texture nodes, it will still affect the brush if you apply that texture as the brush Texture or Texture Mask.

Anyhow, good luck! I hope you'll get picked, find a good project and mentor, and learn a bunch!

5 Likes

You may find the brief mention of texture nodes in this discussion interesting: T85655: Attribute Processor for UX improvement

2 Likes

I think this would be the best way to go about it, but it does sound like a huge project. A separate texture editor whose nodes are evaluated just once and that outputs maps usable in the shader editor would be ideal to me.

2 Likes

Yeah, we have to change the texture inputs in modifiers, texture brushes, etc. That is part of the proposal too.

I made the edit; I admit it was vague :sweat_smile:

We can add a button for rendering the texture, and an option for auto-update when the size of the texture is locked.
The Image Output node will have an option for setting the size of the texture, and the size can be locked.

I am planning on setting the size of the texture only in the Image Output node for now.

I updated the link.

Note the relevant task for the texture nodes redesign is this:
https://developer.blender.org/T54656

There is value in texture generation into image buffers, but most of the places that currently use texture nodes like modifiers, sculpting and texture painting need 3D textures that can be sampled at arbitrary points. So for it to be a replacement, that must work, and output to image textures can’t be a fundamental assumption in the design.
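For reference, this is the kind of arbitrary-point sampling the existing texture datablocks already support, and that modifiers, sculpting, and painting rely on; a minimal sketch using the current Python API, with an arbitrary texture name:

```python
import bpy

# A legacy procedural texture, the kind currently driving modifiers and brushes.
tex = bpy.data.textures.new("ProceduralNoise", type='CLOUDS')

# It can be sampled at any 3D coordinate; evaluate() returns an RGBA value.
color = tex.evaluate((0.25, 0.5, 0.75))
print(list(color))
```

A design that only writes fixed-size image buffers could not answer this kind of query at arbitrary 3D points.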

The Summer of Code timeline has been shortened this year, and even if it was longer doing a full redesign would be too much work. For a successful proposal the scope should be limited. I suggest to implement an initial version of the new system and nodes that integrates with the rest of Blender in the same way as the old system, rather than trying to change both the underlying texture evaluation and workflow.

6 Likes

I am not planning on writing to ImgTex (although it shows in the screenshot). The output of the nodes will go to OutTex, which will be used in modifiers, sculpting, etc., and it will be 3D.
My proposal focuses on creating an independent renderer for texture nodes (on the GPU) and replacing the old texture type (currently used in the texture node editor) in modifiers, painting, and sculpting.

I am planning on not touching ImgTex, and on modifying the old texture type to evaluate on the GPU and have a node tree.
I also want to ask: would this be feasible?

A texture whose size is variable, and in texture painting the node tree would be evaluated too often.

I’m not sure what the abbreviations you are using stand for exactly. It would be easier to understand if you use the same terminology as Blender (node, datablock, etc) and spell things out fully.

GPU acceleration seems like it would make this project too complicated, and there would be performance issues due to latency as explained in T54656.

1 Like

The most important part, which already seems to be taken into account, is that there should not be any separate texture editor. There should be just the shader editor, with the ability to output RGB/Float-type node outputs as a texture datablock. I know it's already planned, but I will elaborate on why this is so important:

Here’s a typical example workflow, which is currently very difficult to achieve and very limited:

One of the common modeling workflows is to displace a mesh with a displacement map, use the same UVs to also map the textures on the mesh, and then decimate the mesh. This makes it very easy to rapidly, semi-procedurally generate various assets, such as cliffs, and even make them game-engine ready.

You will start with a simple shape, like a cube:

Which you can then very roughly edit to the desired shape of the object, in this case stone wall architecture:

You will then remesh the object:

Add a bit of subdivision:

Now, this workflow was impossible up until recently; it became possible thanks to this patch:
https://developer.blender.org/rB1668f883fbe555bebf992bc77fd1c78203b23b14

You can now apply the UV Project modifier and use it to generate box-like mapping:

Unfortunately, the workflow is still very painful, as one has to create a quite ugly viewport contraption of individual projection helpers:

Then you can use the Displace modifier to displace the mesh with the generated UV coordinates:

And finally, you can throw on a Decimate modifier and turn it into a game-ready asset:

You can then, at any time, go back to edit mode, edit the architecture, and have a procedurally generated low-poly asset at the end:


The problem is the clumsy-to-set-up, ugly modifier stack, which has very limited UV mapping capabilities:
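For reference, that modifier stack could be scripted roughly like this; it is only a sketch, and the projector empties ("Proj.X", "Proj.Y", "Proj.Z"), the UV map name, and the "DisplacementMap" texture are hypothetical:

```python
import bpy

obj = bpy.context.object  # the roughly shaped base mesh

# Remesh the rough shape and add a bit of subdivision.
remesh = obj.modifiers.new("Remesh", 'REMESH')
remesh.mode = 'VOXEL'
subsurf = obj.modifiers.new("Subdivision", 'SUBSURF')
subsurf.levels = 2

# Box-like UV projection driven by three helper empties (the "contraption").
uvproj = obj.modifiers.new("UVProject", 'UV_PROJECT')
uvproj.uv_layer = "UVMap"
uvproj.projector_count = 3
for projector, name in zip(uvproj.projectors, ("Proj.X", "Proj.Y", "Proj.Z")):
    projector.object = bpy.data.objects.get(name)

# Displace along the projected UVs with an image texture.
displace = obj.modifiers.new("Displace", 'DISPLACE')
displace.texture = bpy.data.textures.get("DisplacementMap")
displace.texture_coords = 'UV'
displace.uv_layer = "UVMap"

# Decimate down to a game-ready polycount.
decimate = obj.modifiers.new("Decimate", 'DECIMATE')
decimate.ratio = 0.1
```

Every texture input here is a plain datablock slot, which is exactly where a shader-editor branch output would need to be usable.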

Now, this is how this all relates to Texture Nodes:
We should be able to simply create a material, with all the UV mapping and color processing intricacies and so on that the shader editor allows, and then be able to branch it off at any point in the shader editor into an image texture and use that in the Displace modifier, for example.

And then have that exact same shader editor UV mapping tree drive both the material shading and the displacement mapping. This would open a huge world of procedural modeling possibilities in Blender, ones that other 3D software packages have had for quite a while, but Blender has not.

Even something as seemingly simple as being able to use any RGB/Float/Vector node output anywhere in the Shader Editor as the input of the Displace modifier would be a game changer.

In fact, the Displace modifier offers "RGB to XYZ" as one of its modes, which, combined with the power of the Shader Editor, would be even more powerful.

I really hope we won't go the overcomplicated, schizophrenic way of having a separate texture node editor, detached from the shader editor, which would actually push shading and procedural modeling further apart instead of bringing them closer together into one unified system.

9 Likes

I was using these abbreviations.

I am only proposing an initial version: no object geometry outputs (could be possible in the future), only simple nodes like in the texture node editor.
The main focus is to create an independent renderer and integrate these nodes, replacing the older system.
Also, making it backward compatible.

2 Likes

Bump! How did the proposal go? Did you make any progress?

Just to repeat the obvious: this is an editor with nearly no use or function at the moment, but it has huge potential. It could tie in procedural brushes for texture painting, sculpting, and other methods of painting; help procedural material systems; help work with attributes for Geometry Nodes by compartmentalizing textures; help create procedural textures for the displacement modifier and other modifiers that require textures; and more. It even had previews, which is a feature the rest of the node editors could benefit from.

I hope this editor is fixed, tied back in, and no longer a broken dead weight in the software. We've gone two major point releases without it: 2.8, 2.9, and now 3.0 coming soon.

So… this is my vote, and my team's vote: to revive this great feature from the 2.79 days.

Or… the alternative vote: clean up the code and remove it completely. Tie up the loose ends.

5 Likes

Not necessarily; I personally use them in the compositor.

Could we think of the texture node editor as a “field editor”?

Blender does have several 'special cases' which are just fields (force fields, brushes, textures, and the Graph Editor's modifiers).

A field editor could unify all these special cases.

2 Likes

Aren't these two entirely separate tasks altogether, then?

a) a way to allow node trees with function nodes (?) to be used where we currently have the red texture nodes' procedural noises

b) an entirely new editor / system for procedural texture manipulation on 2D and maybe 3D arrays of data? Maybe there is some overlap with the compositor?

I think it’s one task that depends on another task to be done first, not entirely separate.

I have a lot of questions for the author of the topic.
Most of them were already voiced by 3Rton.
But the main thing: a texture is fundamentally an object whose processing can be arbitrarily complex on the CPU.
It can be like geometry that has a lot of modifiers applied to it.
Yes, the GPU is good, but it is also difficult, and it just doesn't make sense here. From the outside, your idea looks like a simplification of shader baking.

Edit: I have abandoned this project,

Why do people still respond to this post, since the author abandoned the project?