Developer Forum

Fields and Anonymous Attributes [Proposal]

I’m interested in understanding better exactly what the trade-offs are between fields and attributes. I’ve tried to read this whole thread, but I haven’t seen developers mentioning the repercussions of fields. I don’t think all of those repercussions were immediately apparent to everyone commenting on the fields proposal. I think coders can often see the long-term impacts of decisions more easily than users can, because every time there’s any conceivable ambiguity, they get smacked with it in the face and have to make a decision about how to handle it.

I’m still in the “who moved my cheese” phase of dealing with fields, unfortunately. It seems to me that one of the issues with fields is the ambiguity: when we get position, when exactly are we getting position? With attributes, that was clear: we say what position we’re operating on by our noodles. To deal with that ambiguity in 3.0, the answer is, “we get the position immediately before we start evaluating the GN modifier.” While I can’t put my finger on it, I have this intuition that we may have some trouble with turning attributes, mid-stream, into other attributes; that resolving the ambiguity in this way limits what we can potentially do with a single GN modifier. But this is only an intuition, something that feels risky. And even if it is risky, there are potential workarounds, even if they’d be clunky.

It seems to me that the main advantage of the fields implementation is that we no longer have the MixRGB/Attribute MixRGB kind of duality; unfortunately, the 3.0 implementation makes it look like that node savings is more than made up for elsewhere. And it seems to me that a fields implementation is not actually necessary to solve that problem, because it could be abstracted behind the interface, with Blender silently making a decision about which to use on the basis of noodle connections.

There is no ambiguity of where the position comes from in 3.0. The quoted sentence above is actually wrong (there might have been a tutorial that got this wrong). Which position is used has nothing to do with the modifier. Instead the position comes from the geometry that the field is evaluated on. Maybe the image below makes this more clear. The red arrow shows what you said (if I understand correctly), while the green arrow shows where the position actually comes from.

11 Likes

Thanks, that makes a lot more sense and helps out.

It still seems to me that there are some ambiguities there. Consider this one:

Or this one, how many Blender users are going to get this right on the final exam without peeking?

Obviously, Blender decides to do something here. There is something that it does, and consistently so. But I’m not sure people are going to be able to predict from the nodes.

These are artificial examples, and there are other structures we could use to remove the ambiguity, but I think once users start putting a lot of nodes in a graph, they’re going to run into problems with their intuitions related to these kinds of things (and the fact that there are hundreds of nodes is going to make it much harder for them to figure out where the problem is.)

These examples are less obvious indeed, and require a bit more getting-used-to. I do think that many of these low level nodes will be wrapped by more powerful node groups that work in a more obvious way in the future. Still, there is no ambiguity here:

  • In the Transfer Attribute node, the Attribute input is evaluated on the Target geometry. The Source Position on the other hand is evaluated on the geometry passed to the Group Output.
  • In the second example, the first Vector Math node is actually evaluated twice: once on the position of the geometry passed to the first Set Position node, and then again for the second Set Position node.
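The "evaluated twice" point can be sketched in a few lines of Python (a rough analogy of the evaluation model, not Blender's actual API): a single field shared by two Set Position nodes is evaluated once per geometry it is attached to, so the second evaluation sees the already-moved points.

```python
# Analogy only: a field is a function of position, and each Set Position
# node evaluates it on the points *it* receives.
offset_field = lambda position: tuple(c + 1.0 for c in position)

def set_position(points, field):
    # Evaluate the field per point of this node's incoming geometry.
    return [field(p) for p in points]

points = [(0.0, 0.0, 0.0)]
points = set_position(points, offset_field)  # first evaluation
points = set_position(points, offset_field)  # second evaluation, on the moved points
print(points)  # [(2.0, 2.0, 2.0)]
```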
3 Likes

It helps to think about it this way: Fields describe a mathematical function, a formula. They are only evaluated on the geometry at the input socket they are attached to. Probably not how it works internally, but that’s how I think about it.

So the calculation (logically) doesn’t happen in the field nodes; they just describe the calculation itself. The actual calculation happens on the geometry, as described by the field nodes.

Again, that’s probably not how it internally works, just what helps me think about it.
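To make that mental model concrete, here is a tiny Python analogy (again, not how Blender implements it): a field is a deferred function, and attaching it to a geometry is the moment it actually gets evaluated.

```python
def add_offset_field(offset):
    """Describe a calculation: a function of position, not a value."""
    return lambda position: tuple(p + o for p, o in zip(position, offset))

# Nothing is computed yet -- the field only *describes* the formula.
field = add_offset_field((1.0, 1.0, 1.0))

# Evaluation happens on whatever geometry the field ends up attached to.
geometry = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
result = [field(p) for p in geometry]
print(result)  # [(1.0, 1.0, 1.0), (2.0, 3.0, 4.0)]
```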

1 Like

That is how it works internally. Also see this paragraph from the first post:

2 Likes

Is that true? From reading the docs, it sounds like the Source Position is relative to the Target.

If that’s not how it works, then the docs really need to be clarified.

The hints are in the names.

Target Geometry vs. Source Position.

I think it’s pretty clear.

The names Target and Source are very confusingly chosen for the attribute transfer node I think.

The Target of the attribute transfer is where the attribute data comes from, so it transfers an attribute from the target. The source position is the position of the vertices you are applying the attribute to. So the source points are where the field is evaluated, and the target is the object to interpolate the attribute from.

Or maybe I have completely misunderstood. Which wouldn’t be the first time. :wink:
Fields are a great system, but there’s work to do in the naming and documentation I think…

3 Likes

Yes, your explanation is correct.

What I’m wondering is how the naming could be improved. What better two words could we use to hint at the Target Geometry / Source Position combo?

From Geometry & Current Position?

Idk… Seems off :confused:

Hello
3.0 is great,

When will we be able to get or write **attributes by name within** the node tree again?

The new way to access attributes is not intuitive and very cumbersome; it is also causing a lot of limitations in our studio pipelines.

It’s confusing because the initial design, as quoted above, supported accessing our attributes the way we’re supposed to.
The last time I tried the fields prototype it was possible; now it’s not anymore, and we are forced to pass every single piece of data through modifiers, even if it causes huge inconvenience…

I get where the ‘target’ comes from. Proximity also uses ‘target’. Still, in combination with the words ‘transfer’ and ‘source position’ it is quite confusing.

Maybe just renaming ‘source position’ to ‘sample points’ or something like that would be enough…?

The word “target” is IMHO very inaccurate. I understand why it was chosen, but a target in general is “where I’m going to apply the result of this”, and in this case it’s more “where am I getting the information from to get a result”, so I would change that name to “Source” rather than “Target”.

Just my 2 cents on this thing :slight_smile:

7 Likes

I don’t mean that Blender handles these cases ambiguously. There is a grammar to GN, and given sufficient understanding of that grammar, you can parse these. What I mean is that, if you hand those pics to somebody with imperfect knowledge of GN and ask them to evaluate them, without having Blender to test them, you’re going to get more people answering, “I don’t know” than you would with 2.93 field-less equivalents.

I’m curious about the second one. It makes sense, given what you’ve said, that the Transfer Attribute is part of a field (function) that is called by Group Output. But in that case, how can I specify that I want the transfer to act on pre-scaled geometry, without using multiple GNs? Anything I do to “close the brackets” on the function and link it back to the main geometry chain will do nothing to close the brackets for the noodle leading to the output. Any link will just mean that two different transfers are happening, with the output of one discarded. Perhaps this is just not possible until we have arbitrary get/sets, so that I can realize the data transfer on the geo and then output something from the geo?

The source/target terminology for Transfer Attribute makes perfect sense to me, because it acts exactly like a Data Transfer modifier, where those same terms refer to the same entities. The only thing that strikes me as weird about the naming is that it’s not called Data Transfer to take advantage of users’ existing knowledge.

Ha! I’m new(ish) to Blender. And only after pulling my hair out for an hour figuring out the fields-based Attribute Transfer node did I finally understand the Data Transfer modifier for the first time as well. I’d say it’s confusing as hell over there too. :smiley: Though I get where it comes from, with all modifiers always using ‘target’ for a selectable object.

1 Like

The function is not called by the group output. It’s called when it is connected to a data flow node.
This, for example: the Position here is an empty shell referring to something called position, but it is not actually the position data, so just imagine it as an unknown variable x. So f(x) or f(position) = position + (1, 1, 1), and without a specific value to substitute for x, this function would not yield anything.


Now, when it is connected to a data flow node that has a green geometry socket, it gets the Position data from there.

Of course the node tree eventually needs to be connected to the group output in order for the node tree to run, but the function in particular is not called by the group output.

In the transfer node’s case, I am not sure but I believe it works like this:


My understanding is that the transfer node has a green geometry socket, so it already evaluates the attribute from that geometry; but since it does not output a geometry, instead of outputting the attribute it integrates the evaluated result into another function that decides which geometry to transfer to.
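That guess can be sketched in Python (purely hypothetical, to illustrate the “node that returns a field” idea; a simple nearest-point lookup stands in for whatever the real node does): the attribute is baked in from the target geometry, and what comes out is a new function that is evaluated later, on the receiving geometry.

```python
def transfer_nearest(target_points, target_values):
    """Return a field: given a source position, look up the value
    at the nearest point of the target geometry."""
    def field(source_position):
        def dist2(p):
            # Squared distance from a target point to the query position.
            return sum((a - b) ** 2 for a, b in zip(p, source_position))
        nearest = min(range(len(target_points)),
                      key=lambda i: dist2(target_points[i]))
        return target_values[nearest]
    return field

target = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
values = [1.0, 2.0]
field = transfer_nearest(target, values)

# The returned field is only evaluated later, on the receiving geometry:
print(field((1.0, 0.0, 0.0)))  # 1.0 -- nearest target point is the first
print(field((9.0, 0.0, 0.0)))  # 2.0 -- nearest target point is the second
```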

Here’s another good example. This one is nice because it’s not artificial; it’s something people would actually want to do with geo nodes:

This is something that should probably have been built into Blender a long time ago; it lets you just make a new sub-bone, paint that bone unnormalized, and then steal weights only from its parent bone, so that you don’t affect any of your other weights. It’s possible to do with multiple vertex weight mix modifiers, but doing it like that is way too much work to set up, so instead, weight painters just don’t do this in Blender-- the complexity ceiling is too high. GN promises to simplify it, and indeed, it’s trivial to do in 2.93.6. So this is a very real-world problem.

Assign all verts at 1.0 to StealFrom and StealTo, then apply the geonodes. Expected: StealFrom 0, StealTo 1.0. Actual: StealFrom 0, StealTo 0.

Hmm, that’s interesting. What if we swap the StealFrom and StealTo noodles? I mean, there’s nothing that the noodles imply about evaluation order, but as somebody that’s poked a lot of things, that seems like a good place to start poking. Actual: StealFrom 1.0, StealTo 0.0. That might work if we rename our outputs, giving StealTo 1.0, StealFrom 0.0!

Let’s try another test case, StealFrom 0.5, StealTo 1.0. Expected: StealFrom 0.0, StealTo 0.5. Actual: StealFrom 0.25, StealTo 0.5. Sigh, I guess it doesn’t work.

This is another case where personally, I just don’t see how this simple operation can be done without using two different geonodes modifiers. And like I said, this is as real-world as it gets.

@vasiln That’s actually just a bug. It would be nice if you could report it on developer.blender.org.
What happens is that first the StealTo output is computed, and then the StealFrom output. But when computing StealFrom, it uses the new StealTo value.
We already have some infrastructure to handle such cases correctly, but didn’t use it properly in the modifier code.
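A tiny sketch of that failure mode, with hypothetical formulas picked only to reproduce the numbers reported above (they are not the actual node setup): writing outputs one at a time into shared storage lets the second output read the first output's new value, while evaluating everything against a snapshot of the inputs gives the expected result.

```python
# Hypothetical formulas, chosen only to reproduce the reported numbers
# (inputs StealFrom 0.5, StealTo 1.0 -> buggy StealFrom 0.25, expected 0.0).
inputs = {"StealFrom": 0.5, "StealTo": 1.0}

new_steal_to = lambda w: w["StealFrom"] * w["StealTo"]
new_steal_from = lambda w: w["StealFrom"] * (1.0 - w["StealTo"])

# Buggy: outputs written sequentially into the same storage, so the
# second output reads the *new* StealTo value.
state = dict(inputs)
state["StealTo"] = new_steal_to(state)      # 0.5
state["StealFrom"] = new_steal_from(state)  # 0.5 * (1 - 0.5) = 0.25, wrong

# Correct: evaluate every output against a snapshot of the inputs.
snapshot = dict(inputs)
fixed = {"StealTo": new_steal_to(snapshot),      # 0.5
         "StealFrom": new_steal_from(snapshot)}  # 0.5 * (1 - 1.0) = 0.0
```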

I’m not entirely sure what you mean, but it sounds very much like what the Capture Attribute node is supposed to do.
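Conceptually (this is a sketch of the idea, not Blender's API), Capture Attribute evaluates a field on the geometry at that point in the chain and stores the result, so later changes to the geometry don't affect the captured values:

```python
# Conceptual sketch of Capture Attribute: evaluate a field *now* and
# stash the result on the geometry.
geometry = {"position": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]}

# Capture: evaluate the field (here, the x coordinate) immediately.
captured_x = [p[0] for p in geometry["position"]]

# A later modification of the geometry...
geometry["position"] = [(x * 2.0, y, z) for (x, y, z) in geometry["position"]]

# ...does not change what was captured.
print(captured_x)  # [0.0, 1.0] -- still the pre-scale x values
```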

1 Like

Thanks, capture attribute indeed is what I’m looking for; I didn’t quite understand how to use it earlier. It can also be used as a workaround for the last problem I demonstrated. I made a bug report at ⚓ T93715 Geometry nodes output depends on order of outputs and @'d you there.

1 Like