Spline info in the shading area is a great idea. What about an input from the compositor itself just like image textures? I would find it very helpful to be able to use masking along with other things inside the shading network. Thank you and good luck with your project!
I think there should be a good solution for creating procedural scratches. By that I mean longer, single lines that seem to follow the surface (I’m not sure how feasible the latter is).
While effects like edges with lots of small scratches can easily be created using the current noise textures, this method breaks down when trying to create long single ones, because you would have to find a way to mask out your scratch. Maybe I’m jumping to conclusions too quickly, but I haven’t seen a good procedural solution for that in Blender yet.
I have no expertise in maths, so I’m not sure how feasible this all is…
Your idea about adding 4D noises is fantastic! Being able to loop 2D noise patterns and evolve 3D noise patterns would be a great addition, especially for the mograph community.
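For anyone wondering why 4D noise enables looping 2D patterns: the usual trick is to map the time axis onto a circle in the two extra noise dimensions, so time t and t + period land on the exact same point in noise space. A rough Python sketch (the noise function here is a toy deterministic stand-in, not real OpenSimplex):

```python
import math

def toy_noise4(x, y, z, w):
    """Stand-in for real 4D noise: any deterministic function of four
    coordinates is enough to demonstrate the looping trick."""
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719 + w * 53.535)

def looping_noise2d(x, y, t, period=1.0, radius=1.0):
    """Sample 2D noise that loops seamlessly in time: the time axis is
    wrapped onto a circle in the two extra noise dimensions, so t and
    t + period sample the exact same 4D point."""
    angle = 2.0 * math.pi * t / period
    return toy_noise4(x, y, radius * math.cos(angle), radius * math.sin(angle))

# The pattern at t = 0 and t = period is identical, so a looped
# animation has no visible seam.
a = looping_noise2d(0.3, 0.7, t=0.0)
b = looping_noise2d(0.3, 0.7, t=1.0)
assert abs(a - b) < 1e-9
```

The same idea one dimension down (3D noise, 1D pattern on a circle) is how seamless looping flipbooks are usually done.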
Replacing Perlin noise with OpenSimplex sounds good too, but please make sure that the scale settings of the two roughly match (for compatibility reasons).
Other things that would come in pretty handy:
Adding color input sockets to the color ramp node
I don’t know what the current status of the asset manager etc. is, but managing node groups for tiling, blur and texture displacement is pretty cumbersome. Maybe there is a way to present them to the user as a node, but handle them as a node group internally?
I think adding triangle and hexagon patterns to the checkerboard node or to the brick node would be nice for creating futuristic scenes and bathrooms.
There should probably be a separate node for vertex color input instead of the Attribute node.
I see there will be some improvements to spline texturing. I wonder if adding support for curve UV transform mapping could be in scope for this project? This Blender limitation was described here.
I know it is not directly related to shader nodes, but it would still make texturing curves way easier.
Let’s say I have a UV-mapped image, and I somehow want to move that resulting mapped texture in camera coordinates, to fake a 2D edge. Or that I have a texture mapped in UV space and I want to add noise to the vector to fake a blur, but in global space, so it always has a constant blur radius. I don’t know how that would be possible with the current shading nodes, or if it’s possible at all. But the concept of having a coordinate system itself be mapped to another coordinate system seems useful.
The fact that textures require a vector coming before them is also limiting for making node groups that act as filters, like blur, parallax, or custom normal mapping. If you want to use such a filter with other textures, you have to duplicate the datablock and go inside to change the texture. Textures cannot be inputs of these filters, coming from outside the node group, because the filter IS the vector, and the vector has to come before the texture. Here the concept of having a mapped texture that can be mapped again after the fact could be reused.
Basically, being able to move a patch of color that you see on screen, from any input, to another part of the screen using any other coordinates after the fact (or at least making that easier) would be great. Maybe it would require pixel displacement. I hope I’m explaining myself well enough and this makes sense.
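One way to picture the "texture as an input to a filter" idea is to treat a texture as a plain function from coordinates to a value; a filter then takes a texture function and returns a new one, instead of having to sit before the texture in the chain. A toy Python model (hypothetical names, not Blender API):

```python
import math

def checker(u, v):
    """A simple procedural texture: 4x4 checkerboard over UV space."""
    return (int(math.floor(u * 4)) + int(math.floor(v * 4))) % 2

def displace(texture, offset_u, offset_v):
    """A 'filter' that re-samples an already-mapped texture at shifted
    coordinates, i.e. moves the visible pattern after the fact. The
    texture is an input here, not something baked into the filter."""
    return lambda u, v: texture(u + offset_u, v + offset_v)

shifted = displace(checker, 0.25, 0.0)
# Shifting by one checker cell (0.25 at scale 4) flips the pattern.
assert checker(0.1, 0.1) == 0
assert shifted(0.1, 0.1) == 1
```

With this model a blur, parallax, or normal-mapping filter could accept any texture from outside the group, which is exactly what the current vector-before-texture ordering prevents.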
This may seem like a trivially obvious point, but it seems like the single most common use case for procedural texturing is for the fast creation of realistic materials, preferably without need for UV unwrapping or manual texture painting. Most of the time this means static (i.e. non-animated) materials that attempt to emulate some kind of surface imperfections or variations - scratches, grease, dents, mould, fabrics, stucco textures, etc.
This means that (with some exceptions) procedural textures that look too uniform or too artificial will be of limited use to most users, either because they are incapable of producing natural-looking patterns, or because they require too much manual tweaking to do so.
I am making this point because I think it is important not to get too caught up in the purely mathematical side of the process - a cool noise that just looks like cool noise is of less practical use (for most users) than a noise that can actually be used to emulate something in the real world, be it marble, clay, or chipped paint.
My $0.02: I wish Blender had better upstream node caching during interactive slider editing. For small trees interactivity is great, but when dragging a slider in a largish tree, performance gets pretty bad. Because of this I avoid the sliders and end up typing in the values instead.
The old compositing software from Softimage called Eddy used to have little color-coded progress bars on each node to show whether a node was being read from a cache or evaluated. It meant you could easily cache all upstream nodes at the input connection and get an interactive slider drag with only the downstream nodes being evaluated. If you were editing higher up in the tree, you could watch the nodes switch color while waiting for a viewport update.
Editing nodes now has that same delay, but I never know whether the tree is done evaluating or not.
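The caching behaviour being asked for can be sketched in a few lines: each node memoizes its result, so invalidating only the node whose slider is being dragged leaves all upstream caches intact. A toy model (hypothetical names, not Blender internals):

```python
class Node:
    """Toy node with per-node result caching. Dragging a slider on a
    downstream node should re-evaluate only the nodes after the cache."""

    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self._cache = None
        self.eval_count = 0  # track how often we actually compute

    def evaluate(self):
        if self._cache is None:
            self.eval_count += 1
            self._cache = self.fn(*(n.evaluate() for n in self.inputs))
        return self._cache

    def invalidate(self):
        self._cache = None  # only this node; upstream caches survive

expensive = Node(lambda: sum(i * i for i in range(100_000)))
scaled = Node(lambda x: x * 0.5, expensive)

scaled.evaluate()
for _ in range(10):          # simulate 10 slider ticks on the scale node
    scaled.invalidate()
    scaled.evaluate()
assert expensive.eval_count == 1   # upstream computed once, cached after
assert scaled.eval_count == 11
```

The color-coded bars Eddy had would just be a visualization of whether `evaluate` hit the cache or recomputed.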
Not sure how feasible each of the following are but they would definitely be useful:
Blurring / Sharpening of textures
Access to previous / next pixels to apply proper kernel operations
Randomized circular gradient spots (like inverted voronoi without overlap)
Procedural scratch / line noise (since stretched noise never looks good in all dimensions).
Fractal tree-like procedural noise.
Allow editing of nodes that aren’t connected to the output without triggering a scene re-render.
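The "access to previous / next pixels" item above is what a kernel operation needs; a blur is the simplest example. A minimal Python sketch of a 3x3 box blur (plain arrays, clamped borders, not a shader implementation):

```python
def box_blur(img):
    """3x3 box blur over a 2D grid of floats, clamping at the borders.
    Each output pixel averages itself and its available neighbours,
    which is exactly the neighbour access shader nodes currently lack."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A single bright pixel spreads over its neighbourhood.
img = [[0.0] * 3 for _ in range(3)]
img[1][1] = 9.0
blurred = box_blur(img)
assert abs(blurred[1][1] - 1.0) < 1e-9        # center: 9 samples
assert abs(blurred[0][0] - 9.0 / 4) < 1e-9    # corner: 4 samples, one is 9
```

Sharpening would be the same structure with a different kernel (e.g. subtracting the blurred value from the original).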
It might be too much for a summer but I would love to see a curvature shader to identify concave and convex features of an object. The bevel shader might be a good start for that.
Also a more advanced ambient occlusion shader where one could specify a direction for leaking effects would help a lot.
Multi-Image Node: Load a folder of images and spread them over objects / mesh islands. A good example is MultiTexture Map (MultiTexture - CG-Source). It could double as a node that’s able to randomize color / values by mesh island.
Use Vertex Weight in Attribute node: Currently only Vertex Color is supported, would be great to have Vertex Weights supported as well.
Texture Unification: This is probably outside of the scope of this project, but figured I’d mention it anyway. Replace current texture solutions with one shared texture system. That way textures can be used in modifiers and shading at the same time. This allows for complex interaction between procedural modeling and shading.
What would be valuable is the ability to create your own expressions. This could be a node that allows text input and the creation of your own in- and outputs. You could then input for example: outputA = inputA * pow(inputB, 2) + 1
This would of course be restricted to the existing Math operations. This way instead of creating huge networks of Math Nodes, a single Node with an expression would be enough.
Adding inputs and outputs could work like it currently does with node groups - in the sidebar (with your additional functionality to control the type).
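Such an expression node could be implemented safely by parsing the text and whitelisting operators and functions, rather than evaluating it directly. A minimal sketch using Python's `ast` module (the socket names `inputA`/`inputB` are just the example variables from the post, not a real API):

```python
import ast
import math
import operator

# Whitelist of allowed operators and math functions; anything else
# in the expression raises an error instead of being executed.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}
_FUNCS = {"pow": math.pow, "sin": math.sin, "cos": math.cos, "sqrt": math.sqrt}

def eval_expr(text, variables):
    """Evaluate a restricted math expression with named inputs."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](walk(node.operand))
        if isinstance(node, ast.Call):
            return _FUNCS[node.func.id](*[walk(a) for a in node.args])
        if isinstance(node, ast.Name):
            return variables[node.id]
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError(f"disallowed syntax: {node!r}")
    return walk(ast.parse(text, mode="eval"))

# The example from the post: outputA = inputA * pow(inputB, 2) + 1
result = eval_expr("inputA * pow(inputB, 2) + 1",
                   {"inputA": 3.0, "inputB": 2.0})
assert result == 13.0
```

The whitelist doubles as the definition of "restricted to the existing Math operations": the node only ever exposes what the Math node already can do.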
I have some suggestions, let me know what you think about them. (English isn’t my native language, so thanks for your patience reading this.)
“ReMap Node”: I don’t mean the one (or something similar to the one) that already exists. I want to be able to use vector manipulation in the middle of the node tree and not just at its start. For this I suggest a node with two input sockets, color and vector, that takes the texture that has been generated and effectively turns it into a new texture with a default “Generated” mapping. Until now the way to accomplish that effect is to build a node group, but in my opinion this approach would result in a more efficient and flexible node tree.
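Conceptually this is a bake-then-remap step: evaluate the incoming color over a default 0..1 grid, then let new vector inputs sample that fixed result. A toy Python model of the idea (hypothetical helpers, not Blender API):

```python
def bake(texture_fn, size):
    """Evaluate a procedural texture on a size x size grid over [0, 1)²,
    producing a fixed image that can be re-mapped afterwards: a rough
    model of the proposed 'ReMap' node's color input."""
    return [[texture_fn(x / size, y / size) for x in range(size)]
            for y in range(size)]

def sample(baked, u, v):
    """Nearest-neighbour lookup with default 'Generated'-style 0..1 coords."""
    size = len(baked)
    xi = min(int(u * size), size - 1)
    yi = min(int(v * size), size - 1)
    return baked[yi][xi]

gradient = lambda u, v: u          # a simple procedural texture
baked = bake(gradient, 16)

# Re-map the baked result: flipping U after the fact mirrors the
# texture, without touching the original texture's own mapping.
flipped = lambda u, v: sample(baked, 1.0 - u, v)
assert sample(baked, 0.0, 0.5) == 0.0
```

A real node would do this per-sample rather than on a grid, but the key point is the same: the vector socket manipulates coordinates *after* the texture has been generated.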
“Numerical View Node”: We already have the viewer workflow that converts any kind of information into something readable by plugging it into the Emission node. What I suggest is a view node that takes the input texture and writes its rounded value onto the mesh in the viewport. Since a single value isn’t very informative about the exact distribution, the node should have a pixelated option where you can control how small each pixel is, and the viewport writes the value for each of them. I posted this in RCS, but due to my poor English I think nobody understood what I was suggesting (I still have the mock-up).