Named Attribute Nodes in Blender 3.0

Here’s an interesting one:

We already have a long list of pros and cons regarding Get/Set (Named Attributes), which is overwhelmingly in favor of it being added (added back?).

I don’t think it’s even an issue of “if”, but of “when”, since even Dalai mentioned in his post that they are considering this; it’s just a matter of time & deadlines (the last two sentences are important):

What I’m aiming to say here: at this point, we should try to give strong reasons why it should be added in 3.0 instead of 3.1 (the “when”), because the other battle, the “if”, is long over. Or at least I hope…

This is (kind of) hinted here:

Also, I think I understand the huge focus on “share-ability”. For most of us (single users) it’s a smaller concern than for a team (multiple users). And remember that the development focus is on Blender Studio. They probably rely heavily on this concept.

That being said, I still agree with this statement:


This has been the pitfall of most of the recent controversial decisions. Blender Animation Studio is a flawed benchmark of production needs. The original attribute-based GN design (which they had to backtrack from) was also supposedly modeled on their needs, and the reason we still have dashed lines is partly because the feedback on them from the animation studio was “mixed”.

Overall, the illusion that Blender development is somehow “community driven” or “community oriented” has started to fall apart very rapidly recently.

Blender seems to be going back into the old pre-2.6 pattern of design:

  1. Find a small issue
  2. Artificially turn it into a big problem
  3. Invent monstrous solution which does more harm than good to address it
  4. Implement such solution

It’s been that way from day one, when the relatively minor issue of selection/action separation was addressed with the monstrous solution of using the right mouse button to select things. A solution which did far more harm than good to Blender in the coming years, and became one of the major obstacles to widespread adoption.

The same happened with datablock management, where the trivial task of deciding which datablocks should be cleaned up resulted in the monstrous fake-user workflow.

Or, more recently, the minor issue of distinguishing node connection types, already partially addressed by node socket shapes, received the monstrous solution of drawing some node connections with dashed lines.

And now, the relatively minor issue of share-ability and namespace conflicts, which can be addressed in numerous ways, will most likely be addressed by a monstrous solution which will once again hurt Blender in the months, perhaps even years, to come.


My main concern involves share-ability because I’m in charge of making our in-house tools work better and faster, so I understand this concern. But since I’m the one responsible, it’s on me to create the tools properly; I don’t want the tool itself to be limited.

For example, for pipeline reasons I may want an attribute that stores some studio-specific information that a user must not be able to change from the UI at all; that information must not be visible, nor touchable, from the UI. Without internal named attributes I cannot do that, AFAIK.

And yes, I totally agree that this must go into 3.0. 3.0 will be the first release where Geometry Nodes is presented as the grown-up tool it is right now, and a fundamental part of the workflow is being removed. People will learn how to use GN now, so this, being a basic part of the optional workflow, should be in 3.0.


It doesn’t have to be this drastic. The community is more emotionally involved and in general has more people, which is why it can react quickly to a change or a topic.

The teams on the Blender side are smaller in comparison, and being an organization means they inherently react more slowly (because of the chain of command, the need to stay organized, formalities, documentation, etc.).

So it’s kind of unfair to make such a statement.

The community is, however, highly involved, and people care about what happens. As long as that continues, things tend to stabilize in the end.


I did a major part of the UI/UX design for one of the most successful commercial rendering engines today. And that engine succeeded in significant part because of its UI/UX, as that’s what separated it from the others on the market at the time.

I cannot even remotely imagine ignoring such widespread controversial feedback as long as I was in charge of that aspect of the software. Even when a minority of users was dissatisfied with a new feature or a change to an existing one, there was always a way to iterate on the design to satisfy at least 85% of users. Not once did such iteration make the design worse; it always turned out better.

If 50% or more of the public userbase were dissatisfied with a design, I’d definitely consider it a failure, and it would never cross my mind to let such a design make it into a final product in an unaltered state.


Important news

@Ton spoke

I am the person behind the decision and take responsibility for it. Good design is about thinking in restrictions in the first place. It’s not to frustrate artists but to create tools and habits with future-proof designs we can work with for another decade.

Further we carefully listen to feedback and are open to learn. My strategy is to do things first really good (strictly according design), then explore the options of this design space well (proof of work), and then allow users freedom to hack around with it (on top of it).

Personal opinion:

“future-proof designs we can work with for another decade.”

That is precisely why I am so worried; there are important arguments in the recap showing that this is not a future-proof idea :slightly_smiling_face:


Yes, we are all on the same page.
It’s Ton’s duty to take responsibility for the decision, which is why he took it.
But it’s the team that can convince (him) of alternative solutions, since they have the technical understanding.

At the same time, I wouldn’t want this to become a long process (years instead of weeks).
I have mixed feelings about this. Don’t really know how to react…

It’s Ton, so I will probably get flak for merely disagreeing with him, but still, I couldn’t disagree more with this phrase:
Good design is about thinking in restrictions in the first place.

My experience has been that if the design doesn’t consider the true, final vision of the tool, but only a limited version of it, both the development process and the final product become much worse.

If you finish a version of the tool with a limited design that doesn’t cover use cases the tool is expected to handle in the future, then at some point while expanding the tool’s use cases you may run into limitations which will either require a complete, fundamental change of the tool’s architecture, costing a lot of time and development resources, or some hack which drastically reduces the capabilities of the new features that enable those use cases.

If you approach the design with the true vision of what the final product is, instead of some limited intermediate step, the chances of laying out a good foundation increase significantly.

Almost feels like they’ve become hostages of their frequent, regular release schedule :expressionless:

To TL;DR this perhaps chaotic post: good design should not be about thinking in restrictions in the first place. The very first thing should always be a vision of what the final future product should be. And there needs to be at least one person with such a vision.


I think I can understand why the geometry nodes team prefers connections over named attributes: the use of named attributes can make the graph look “broken” given the way fields work. But the problem with attributes is that they are such a fundamental thing for proceduralism that you cannot just get rid of them entirely, or tell people not to use them.

Using connections can make the graph look more straightforward in some cases, but in other cases it also makes things more complicated, or nearly impossible to accomplish.

For example, to create something like a hair system, you instance a bunch of polylines on a point cloud and then add a noise effect to the lines whose strength grows from the roots to the tips, while each line still has a different seed. The easiest way to do this is to create a ramp attribute based on the index and save it on the polyline before instancing, then apply the deformation based on the attribute value afterwards. But this kind of task is hard to achieve with connections alone (you can do it, just without the same ease and precision).
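The attribute-based approach described above can be sketched in plain Python (this is not the Blender API; all names and the data layout are illustrative): each point carries a named “ramp” attribute written before deformation and read back afterwards, so the noise strength grows from root to tip.

```python
import random

def make_strand(num_points, seed):
    """Build one polyline as a list of points, each carrying a named
    'ramp' attribute going from 0.0 at the root to 1.0 at the tip."""
    return [
        {"pos": (0.0, 0.0, i * 0.1),        # straight strand along Z
         "ramp": i / (num_points - 1),       # named attribute, saved per point
         "seed": seed}                       # per-strand seed attribute
        for i in range(num_points)
    ]

def apply_noise(strand, strength=1.0):
    """Deform each point by noise scaled by its stored 'ramp' attribute,
    so the root stays fixed and the tip moves the most."""
    rng = random.Random(strand[0]["seed"])
    out = []
    for p in strand:
        amount = strength * p["ramp"]        # read the named attribute back
        x, y, z = p["pos"]
        out.append({**p, "pos": (x + rng.uniform(-1, 1) * amount,
                                 y + rng.uniform(-1, 1) * amount,
                                 z)})
    return out

strand = make_strand(5, seed=42)
noisy = apply_noise(strand, strength=0.5)
```

The point of the sketch is the decoupling: whichever step writes the “ramp” attribute doesn’t need a wire to whichever step reads it, which is exactly what explicit sockets alone can’t express across an instancing boundary.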

So from the view of an asset builder, it’s better to have attribute get/set nodes directly in the graph. The current design may be enough to solve immediate problems, but when node setups get more and more complex (like a physics solver) someday, it’s going to be a nightmare to manage hundreds of attributes via explicit sockets. By all means, someone who needs attributes can fall back on workarounds like breaking the whole graph into multiple geometry-nodes modifiers or even multiple objects, at the cost of readability of course. So this restriction cannot eliminate the use of attributes; it only makes attributes hard to use where they’re really needed.


Also, most sharing of procedural tools like GN trees would happen within a studio team. It’s definitely that way with other well-known procedural DCCs. A team will always set naming conventions etc. to prevent issues with their own pipeline. That’s where the responsibility should be, and anyone working on commercial projects knows that.


I don’t think so, especially now with the Asset Browser; there will be a lot of community node groups providing high-level interaction for non-tech artists.

That said, that’s not a reason for this; TDs should learn to arrange the node tree properly to share it. No need to limit the tool.


Surely, “future proof” is more about being open and flexible. IMO the real task is to make sure built-in attribute names are well defined with strong naming conventions, everything is consistent, and tools are small with few side effects. Unix is like that, and look how long it has lasted (about 50 years); pretty future-proof if you ask me. There’s a reason other tools like RM and H have adopted Unix-like engineering conventions and have also lasted so long.


You know what hurts most about this post?
It feels that what has been a discussion until now (admittedly, a one-sided one) is not even a discussion anymore.
It felt until now that there’s a chance. Now I don’t feel like even trying :confused:

Keyword: “feel”; because deep down I still hope things will get better.
I still want Blender to be as procedurally strong (flexible) as it can be.


Indeed, very good points.

If the designers are so afraid of name clashes, why are their variable names so obvious? Why not use an unlikely prefix, for example?
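The “unlikely prefix” idea can be sketched in a few lines of plain Python (the prefix and helper names are purely hypothetical, not part of any Blender API): internal pipeline attributes get a studio-specific namespace prefix, so they can’t clash with user-created names and can be filtered out of any UI listing.

```python
STUDIO_PREFIX = "_acme_"  # hypothetical studio namespace

def internal_name(name: str) -> str:
    """Turn a plain attribute name into a namespaced internal one."""
    return STUDIO_PREFIX + name

def is_internal(name: str) -> bool:
    """Attributes with the studio prefix are treated as hidden."""
    return name.startswith(STUDIO_PREFIX)

# A mix of internal (pipeline) and user-facing attributes:
attributes = {internal_name("shot_id"): 42, "radius": 0.5}

# Only non-internal attributes would be exposed to users:
visible = {k: v for k, v in attributes.items() if not is_internal(k)}
```

This is essentially the convention other procedural DCCs rely on: one agreed-upon namespace rule, enforced socially (or by a thin helper layer), instead of removing named access altogether.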

Note that removing the attribute nodes does not resolve the name-clash issues; they can still occur on modifier outputs. So this is not really related to the topic, and is not a valid argument for removing named attribute access within function scope.

I feel like they are trying to solve the old programming problems of variable shadowing & globals, reinventing the wheel for already well-established concepts that were simply resolved with good-practice guidelines and usage conventions.

About this:

One of the arguments of the community is the following:

Is the serpent already eating its own tail?
If there’s attribute input within the Object/Collection Info nodes, then the redesign is already breaking its own rules.

As Hans explained, an attribute input node supported in the modifier interface is even more share-able than attribute sockets.

In my opinion, the design of @HooglyBoogly is a very good compromise; by informing users about globals being read/written in the interface, all possible share-ability issues are clearly presented to the users.

I did extrapolate the design a bit; here are my two cents:


Datablock selectors break encapsulation, and that is a clear shareability problem, but I find it weird that the conclusion drawn is that therefore doing it with attributes is ok.

In my opinion this just highlights that in fact, just like attributes, datablock selectors should also be part of the interface and be overridable on the modifier level.

This is not an argument against named attributes, if they clearly appear on the interface, like your example shows, they are definitely more help than trouble. But I think the way datablock selectors work right now is broken and should not be the motivation to handle attributes in a similar fashion.


I had to quote this because, as powerful as 2.93 Geometry Nodes was, there was something clearly different about how the node tree looked compared to what we are all used to in Cycles and the compositor. With a few exceptions, the trees were extremely linear, because the nodes that actually worked with attributes often had just one output, destined for the next node that worked with attributes (so in some cases it felt like a reskin of the modifier stack with some options for branching).

The fields design, even though it currently lacks the two nodes that would solve the core issue here, feels more like building a proper node tree, with most nodes having multiple inputs and outputs and many branches going at once. The purpose of fields is to ensure you won’t have to define attributes at every step (which could mean a lot of attributes named ‘a’, ‘b’, and so on), and for the purpose of creating a ‘Cycles’ for geometry it is working really well.


Hello everyone. As a programmer myself (Blender is a hobby for me), and going against this point of view, I would like to expose my thoughts for anyone to argue back, because ever since I saw the current fields design I’ve thought it was very good.

We programmers always try to simplify things for other programmers, because trying to understand code made by other people is really difficult and time-consuming.
Also, for non-programmers, we try even harder and set everything up with the lowest probability of failure, as any misunderstanding may incur more work for us :sweat_smile:

I understand that people, especially those with programming knowledge, see this system as a way to “program geometry”. But the thing is, we computer scientists hardly ever program everything ourselves. We make use of already-made “APIs”, as they are less error-prone and easier to understand, even for less expert users.

This is the way I see the current fields design: as an API. You don’t have to keep track of everything, because the system does it for you. That is the importance of fields: the fact that, as in shading, everything is ready to work. Even if you connect two sockets with different types, the node group has a well-defined fallback that will fix it for you.
Try yourself, if you can do any programming, to program anything without variables*. It is actually possible! :grinning_face_with_smiling_eyes: Just really inconvenient.
(* other than arguments in function calls)
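To illustrate the “no variables” point, here is the same small computation written twice in Python: once with named intermediates (like named attributes), and once purely through function arguments (like chaining sockets directly). Both function names are made up for the example.

```python
# With variables: intermediate results get names, like named attributes.
def mean_with_vars(values):
    total = sum(values)   # named intermediate
    count = len(values)   # named intermediate
    return total / count

# Without variables: everything flows through function arguments,
# like wiring node sockets directly. Possible, just less convenient
# once expressions get deeply nested.
def mean_no_vars(values):
    return (lambda total, count: total / count)(sum(values), len(values))
```

Both return the same result; the difference is purely in how readable and extensible each style stays as the computation grows.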

If GN fields work as intended, people could use Blender GN with no programming knowledge, which describes most artists (even if this chat may not be a good sample), because anyone who has learned to program, or has taught someone, will remember that the first wall you hit is actually understanding (associating) variables with values or objects. It is not that straightforward to understand that a name represents a thing that may vary over time.

In fact, I can’t think of a case that can’t be done with the current system, given the correct feature requests, and enough time for the developers.

Actually, if you really want variables, there is the Python scripting option. Why does nobody use it? In my opinion, because it is harder and more error-prone: just the inverse of using APIs.

So I think this fields work actually does reduce errors related to different data types, while being easier to read and follow (rather than keeping track of variable names), and easier for any new user to step into.
Feel free to disagree with me and reply :grinning_face_with_smiling_eyes:


That’s true, the field design itself works quite well. The different feeling between 2.93 geometry nodes and the current version actually comes from the scale at which you look at the node graph. Just like a computer program: when you focus on the overall procedure it looks quite linear, and when you focus on a specific call stack it looks more like a tree. They are different aspects that compose a whole program. Software like Houdini treats the geometry-centric workflow (SOPs) and the data-centric workflow (VOPs) as two different things, and Geometry Nodes here is a combination of the two workflows, which is why something like fields is necessary.

Hello Saisua

I think there’s an important distinction that needs to be made between an “Artist” and a “Technical Artist” :grinning_face_with_smiling_eyes:

This is the way I see the current fields design: as an API.

In my humble opinion you are wrong here; it is more of a visual programming language represented as nodes. You can pack your nodes into reusable functions, have arguments exposed, there’s even conditional math to switch calculations based on conditions, and for/for-each loop nodes are on the todo list.

An API in this case would be the equivalent of loading a bunch of new functions, aka node groups, into your toolbox.


In this whole debate, I don’t think anybody has argued against fields. They have been good from the beginning and will be for the future of the project.
The problem is with flexibility, which suffers as a result of removing Named Attributes (or their Get/Set field equivalent).

I keep having an issue with this idea (in bold). What you say here is true, but JUST before fields were committed to mainline, I could develop my own mini-solver and not have to wait for a developer to create the specific node combo that I need.
Keep in mind a recurrent issue: priorities change!

(just as an example)
Today the priority is GN. What if tomorrow is Eevee?
Those that are excited about Eevee will be ecstatic. GN all of a sudden will have to wait.
Would you want in that case to wait 6 months for a node?