Geometry Nodes Gizmos Feedback

Thanks for posting this here. As you noticed already, custom gizmo shapes are out of scope for the first release for reasons mentioned in the original post. We don’t know for sure if we’ll have custom gizmo shapes in the future or if we’ll just add a couple more built-in gizmos that can be combined in new ways but still all feel familiar.

Your proposal looks interesting, but I think it misses an important point: How does Blender know in general how mouse clicks and movements in the viewport result in changes to the gizmo? Currently, you show how the gizmo geometry is generated from values that the gizmo controls, but not how to reconstruct the original values when the gizmo is changed in the viewport.
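To make that concern concrete, here is a toy sketch in plain Python (all names invented, nothing from Blender’s actual code). Even for a trivially monotonic forward mapping, recovering the driving value from a viewport edit means inverting that mapping, and a general node network gives Blender no guarantee that an inverse exists:

```python
# Hypothetical sketch: recovering a gizmo's driving value from a viewport
# drag by numerically inverting the forward mapping. Names are invented.

def forward(value):
    # Forward mapping: the node group derives the gizmo's circle radius
    # from the value it controls (an arbitrary monotonic example).
    return 2.0 * value + 1.0

def invert(target_radius, lo=-1e6, hi=1e6, tol=1e-9):
    # Bisection: find `value` with forward(value) == target_radius.
    # This only works because forward() is monotonic; a general node
    # network gives no such guarantee, which is the point raised above.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if forward(mid) < target_radius:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# The user drags the circle out to radius 7 -> the value must become 3.
recovered = invert(7.0)
```

For anything non-monotonic or multi-valued, the bisection above simply fails, which is why the backward direction has to be part of the design rather than inferred automatically.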

This can be detected automatically in simple cases like just controlling the radius of a circle, but for that you could also just use a built-in gizmo. It would be good to have a more compelling example for a custom gizmo that is not as easy to build out of existing gizmo shapes.

I didn’t think about it before, but I just noticed that this interaction design with the gizmo may also be related to modal node tools, where the node group handles the event processing (while we have a rough design for that, it’s still WIP).

Would be very useful if you can try to reliably reproduce the issue so that I can fix it.

Yes, that’s currently intentional. It often makes interaction with the gizmo easier and more predictable. Especially if changes in the gizmo also affect the curve the gizmo should follow.
We did talk about this kind of explicit curve-following already, and think it’s generally a good idea, but a bit out of scope for now.

In the context of this thread, a gizmo is something the user interacts with. I think it’s reasonable to be able to add what you describe at some point, but it’s out of scope for now. This feature shouldn’t really be related to how the modifier API works though.

Yeah, as you found already, this solution isn’t very scalable, because there isn’t just a single geometry chain. Also, the gizmo wouldn’t even know at which point in the geometry chain it should start to take transforms into account.

You should not, and you can’t without going into the node group. The first post describes when gizmos are active/visible in the viewport. You can only see gizmos that control Value nodes when the node group that contains that node is opened in some node editor. Someone just using the node group without entering it will never see the gizmo. I think this perfectly aligns with the existing concept that you can only edit values from the outside that are exposed as group inputs.

While I think it’s sometimes possible to not mix the gizmo stuff with the rest of the node group, it doesn’t really work in general, unless you want to duplicate potentially large parts of the node group. To me, this seems to make things more complex and less flexible.

Totally agree, the design intended something else. See my response above.

Not sure, which “nature” you mean here. To me it feels fairly natural by now that gizmos that affect a geometry should be transformed together with that geometry, and for that they have to be linked in some way.

5 Likes

While it is true that the current design is the only one that properly addresses all the requirements put on the gizmos, I am still quite worried that introducing the concept of reverse flow will open a bottomless can of worms that will cause problems for eternity.

If this is the only solution, then so be it, but I would triple, even quadruple, check that this is really the case before it goes into master and gets called production-ready, so we don’t end up in a situation similar to what happened with the attribute-based GN design.

I would much rather have somewhat more limited gizmos that don’t introduce fragile new concepts than gizmos that can do everything (be driven with drivers, keyframes, etc.) but turn node networks into an unreadable jungle. I mean, if we look at native gizmos in object/modeling operations, they can’t really be controlled in these advanced ways either, yet that doesn’t limit them much.

1 Like

Well, that’s what we are doing here. Never can be 100% certain of course.

Not sure what you mean; you can use gizmos on e.g. the object location, which you can also modify with keyframes and/or drivers. It’s the same thing.

It’s difficult to find the right set of limitations that simplify the design without making it unusable for things that we want it to be used for. These kinds of designs for geometry nodes have to work at multiple abstraction levels at the same time for different kinds of users. Making something more convenient for some users often makes it harder for others who work on a different level.

1 Like

I think the inverted design is solid.

But I completely agree with you about readability!

What worries me most is the “Grape Bunch” workflow. I don’t think a Blender user should be told how to tidy their room. But I’d like the geometry “Zones” workflow to be more user-friendly.

I’d prefer gizmos to be less flexible and more limited rather than unreadable.

2 Likes

The dial gizmo snaps to zero on full rotations:

5 Likes

I really haven’t found a satisfactory answer to this question… So it’s pointless to answer the previous one.

However, it seems to me that associating a geometry with a gizmo could be relevant.
For example, to make Sliders, Bends or UIs.
The geometry wouldn’t be interactive, just aesthetic.

I don’t yet know how to integrate it, perhaps with a geometry input on the gizmo node.

2 Likes

It happens randomly when I tweak a gizmo or when I change the node tree. I can do the same steps several times without issue, but sometimes Blender suddenly freezes.

Try this one…

Grab some gizmos and tweak them randomly. On my machine, Blender sometimes suddenly freezes.

Something’s bothering me about the transform: why allow multiple connections?

If multiple connections are allowed, why not duplicate the gizmo into each joined geometry? Isn’t this a “dynamic” duplication?

Why does a geometry transform only work if it’s linked to a geometry?

We could edit the gizmo transformation without using the gizmo’s input parameters.

Why is the transform output called transform and not gizmo?

You could even imagine a new “set gizmo color/theme” node with a geometry input, to avoid setting up the color in the arrow gizmo. Or “Set Scale mod (World/Screen)”

The transform is behaving very strangely…

It just needs to be linked to the output.

You’ve got an interesting point. After a gizmo is plugged into a geometry flow, one could make an ambiguous mess with it…

Now, what is the correct transformation for the gizmo?

That’s a valid point. It’s essentially the same topic that’s already mentioned in the original post for instances. Currently, the gizmos only show for the first instance. In theory we could use different heuristics, but it’s likely something that needs to be user-configurable at some point. We don’t really always want to show the gizmos multiple times I think, as this could overload the viewport with potentially redundant information.

I haven’t implemented it explicitly for the Join node yet, but I’d assume that the earlier inputs currently take precedence over the later inputs.
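Assuming that is indeed the behavior, the precedence rule could look roughly like this (a hypothetical sketch in plain Python, not the actual implementation; all names are invented):

```python
# Toy model of the Join Geometry precedence rule described above: when
# several inputs each propose a transform for the same gizmo, the
# earliest input wins. Data structures are invented for illustration.

def join_gizmo_transforms(inputs):
    # `inputs` is an ordered list of dicts mapping gizmo id -> transform.
    merged = {}
    for proposals in inputs:
        for gizmo_id, transform in proposals.items():
            # setdefault keeps the transform already claimed by an
            # earlier input and only fills in gizmos not seen yet.
            merged.setdefault(gizmo_id, transform)
    return merged

first = {"radius_gizmo": "translate_by_A"}
second = {"radius_gizmo": "translate_by_B", "angle_gizmo": "rotate_by_C"}
result = join_gizmo_transforms([first, second])
```

The same merge could be flipped to last-wins by overwriting instead of using `setdefault`; which rule feels right is exactly the open question here.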

Also see the last point of the gizmo section of our workshop meeting notes.

We generally want to retrieve the gizmo transforms from the geometry that’s shown in the viewport. I still have to support looking up the transforms correctly when the Viewer node is used.
If the transform output is not used, it’s also not part of a geometry that’s shown in the viewport, so it’s ignored.
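As a rough illustration of that rule (invented data structures, not the real code): the transform is looked up in whatever geometry reaches the viewport, and an unused Transform output simply never shows up there, so the lookup falls back to identity:

```python
# Hypothetical sketch of the lookup described above. The geometry that is
# displayed carries the accumulated transforms of every gizmo whose
# Transform output was linked into it; everything else is ignored.

IDENTITY = "identity"

def resolve_gizmo_transform(viewport_geometry, gizmo_id):
    # If the gizmo's Transform output was never linked into the displayed
    # geometry, it is simply absent and we fall back to identity.
    return viewport_geometry.get("gizmo_transforms", {}).get(gizmo_id, IDENTITY)

linked = {"gizmo_transforms": {"arrow": "translate_z_2m"}}
unlinked = {"gizmo_transforms": {}}
```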

It was called Gizmo in my original implementation, but we changed that at some point because the gizmo can exist without this output being used, and it’s technically only the “Gizmo Transform”, even on the code level. We skipped the “Gizmo” part of the name, because it’s redundant with the node name.

That could be implemented with the current design in theory, but it doesn’t seem to solve a problem so far.

1 Like

As mentioned, I believe that it’s really use-case dependent whether the gizmos can and should be built separately from the rest of the node tree. If the kinds of gizmos you build require the “Grape Bunch Workflow”, then it would be very annoying to have to duplicate nodes. Portal links might help eventually to separate gizmos from the rest of the tree a bit more visually.

Indeed, will have to fix that.

Could be done, but generally it would be good if the gizmo can be placed somewhere on the geometry where the association is clear without extra geometry.
Gizmos that just act as a new kind of slider that shows up in the viewport but has no spatial correspondence to the geometry are out of scope currently.

I couldn’t reproduce the issue with this file yet unfortunately. Maybe it’s fixed already in a new build, don’t know. If not, please keep an eye open and try to remember more exactly what you did before it crashed.

1 Like

I used PR112677.29540a2f8d8a. There seems to be a pattern. If no node in the GN tree is selected, I can tweak gizmos. But the moment I select a node, a freeze happens.

I’m trying PR112677.9d7cd68b5533 now. I can tweak gizmos and select nodes; no freeze has happened so far.

Thank you Jacques for taking the time to reply

Now a bug and crash report!
The gizmo disappears if I duplicate the geometry node:


Blender crashes if I use Ctrl+Z.

Next
I wanted to make a dynamic constraint system.
For this I used a Maximum node; I wanted the gizmo to stop when it equals the inner radius.

Do you think this can be implemented?

Then I tried to modify 3 inputs that have different constraint values with a single gizmo.

The gizmo locks at the minimum value, which I find counter-intuitive.
Shouldn’t it behave this way?


Edit: I realize that this is a general problem.
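The two behaviors can be sketched like this (a toy model in plain Python, not Blender’s implementation): either the shared gizmo is clamped to the intersection of all constraint ranges and locks at the tightest one, or each input clamps independently while the gizmo keeps moving:

```python
# Toy model of one gizmo driving several inputs with different clamp
# ranges. Both policies below are hypothetical illustrations.

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def drive_shared_clamp(value, ranges):
    # Policy A: the gizmo itself is clamped to the intersection of all
    # ranges, so it locks as soon as the tightest constraint is hit.
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    v = clamp(value, lo, hi)
    return [v] * len(ranges)

def drive_independent_clamp(value, ranges):
    # Policy B: the gizmo keeps moving; each driven input clamps on its own.
    return [clamp(value, lo, hi) for lo, hi in ranges]

ranges = [(0, 1), (0, 5), (0, 10)]
```

With the gizmo dragged to 3, policy A locks everything at the tightest maximum, while policy B lets the looser inputs keep following the drag.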

1 Like

I have other questions. I understand why transforms don’t work after a gizmo if they’re not linked to geometry.

But why aren’t they automatically connected to the geometry at the end of the node graph when they’re not linked?

In what context is it irrelevant to link the gizmo to the end of the geometry?
Better to do it all the time by default?

If it’s imperative to be able to choose, why not set a parameter in the group node, “ignore unlinked transforms”?

Edit: even better, add a Relative/Absolute option on gizmos. It is Relative by default.
Relative: the gizmo is connected to the end of the node graph.
Absolute: the gizmo is not connected to the geometry.

If you connect the transform to the geometry, the parameter disappears; it is necessarily “Relative”.

So even people who create assets wouldn’t have to worry about it.
That would be a plus for readability, I think.

Secondly, I think the space after the gizmo should always be considered, even if it’s not connected.
It’s sometimes simpler to use a series of transforms to modify a position.

(If duplications are supported)


To achieve this configuration, I think it’s more flexible to do it this way

(Currently)

It seems important to me that the space after the transform should always be considered.

I’m curious if there are any user personas created for Gizmos.

Having a Gizmo tell its function by type, placement, and direction demonstrates a desire for simplicity. This level of simplicity could make some Blender use cases easy(er) for non-Blender users to step into (e.g. non-Blender user creating small 3D world for VR or to be turned into TTRPG maps.) I doubt my example is the current goal. User personas would help guide my feedback.

What does that mean? User personas?

I hope it does not mean that you like to gather some statistics about what users do with a tool?

No, it looks like he just means different user skill-level profiles, which set the interface and tools up for either basic or advanced abilities.

Not within the scope of this thread.

2 Likes

That’s correct. I assume the use case I keep thinking of (users with limited experience in Blender, using it for layout and blocking with limited customization) is not the current goal. If I understand the current goal, I will be less likely to distract the conversation.

Are there official targeted personas, goals or parameters we are trying to work within?

@Hadriscus

User Personas are a technique for business process design, general UI/UX design, as well as marketing. It helps to ensure the creators are focused on the end goal of how and where a product will be used.

A simple example would be creating a tool for executives to visualize time series data. You would want to remove as many windows, menus, and overlays as possible and make the available actions appear obvious and intuitive. Taking that idea further, if I was using Blender as the base for that tool, I would disable most shortcut keys, while also providing a way for the executive to incrementally look behind the curtain to see the power that resides just below the surface.

Reference

Persona (user experience) - Wikipedia
I sometimes shy away from sharing links to Wikipedia as people can take it the wrong way, but this is a good primer.

@LeonardSiebeneicher

Definitely not. It is about making sure everyone developing an item is on the same page, working towards solving the same problem. While Microsoft and Google - as well as any application that asks for “anonymized usage data” - are clearly doing this, I personally dislike that approach. User Personas can be prepared without all of that data collection.

2 Likes