Snapping & precision modeling improvements

For the tools and gizmos, there could be an option in the tool options to enable the tool with a base point. That could alleviate having to do finger yoga. Or perhaps, instead of holding Alt or similar, pressing it once could toggle base point mode, essentially like "B" at the moment. Maybe B is starting to make sense to me…

I see what you mean. I tried it out and it works as expected. Yet it is not the most intuitive process, especially for newer users:

  1. Go into Edit Mode and select a vertex
  2. Press Shift+S and snap the 3D cursor to the selection
  3. Go to Object Mode and set the pivot point to the 3D cursor
  4. Select the object(s) and press R
  5. Press B to enable the base point to align
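Under the hood, the steps above just establish a custom pivot for the rotation; mathematically, rotating about a base point is translate-rotate-translate. A minimal 2D sketch in plain Python (not Blender's API, names are illustrative):

```python
import math

def rotate_about_pivot(point, pivot, angle_rad):
    """Rotate a 2D point around an arbitrary pivot (base point).

    This is the math the 3D-cursor steps above set up: translate so
    the pivot sits at the origin, rotate, then translate back.
    """
    px, py = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (pivot[0] + px * c - py * s,
            pivot[1] + px * s + py * c)

# Rotating (2, 0) by 90 degrees about pivot (1, 0) lands at (1, 1).
print(rotate_about_pivot((2.0, 0.0), (1.0, 0.0), math.pi / 2))
```

Blender's pivot-point setting effectively chooses the `pivot` argument for you; the whole UX discussion here is about how quickly the user can pick it.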

This UX could be improved in a few ways:

  • when enabling a base point with rotate, the most common case is the two-point pivot, so this option could first ask for the pivot point and then for the target point. I do get that setting a pivot point could be kept separate from the transform; still, in most CAD programs, when rotating, you are typically asked to set the base pivot point for the rotation
  • this is a bit different and its own topic, but I wish the 3D cursor could be snapped in a similar way to other geometry, especially to vertices. That would make the process of setting the pivot much easier
  • for gizmo transforms (especially rotate, but applicable to all), first hold a button like Alt to set the pivot, then perform the transformation

There should be an improvement to the transform gizmos that allows the user to simply click and drag the base point.

For the tools and gizmos

It could be possible for tools (an "autobase mode" checkbox), but it is not possible for gizmos. Using gizmos assumes gizmo-driven axis restriction, which assumes press-holding to set the axis restriction first.

but I wish the 3d cursor could be snapped in a similar way to other geometry, especially to vertices

It already does. The 3D cursor was enhanced, and dragging it has snapping behaviour.


I’ve missed that. Thanks for pointing it out. Testing, I also see that it works in 3.5.

Good, so it’s actually one less step in the process to adjust the pivot point when rotating with a base point. Still not the most intuitive for a beginner, but better than having to go into Edit Mode and press Shift+S.

Minor followup on the finger contortions; I spent some more time with it over the weekend, and found this procedure works fine with no contortions required.

Assume 2 cubes in scene.

  1. Select one cube.
  2. Without delay or mouse action, on the keyboard hit G B. (One right after the other).
  3. You are NOW in “source pick mode”. Move your mouse over the selected cube (hovering), and click something on the cube you have selected. (Example: a corner point).
  4. After clicking the source point, hover your mouse over the second cube (the first cube will travel along for the ride). The destination point (say, another corner point) will highlight.
  5. Click the mouse to set the desired destination and complete the operation.
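The whole G B procedure reduces to a single vector subtraction: the selection moves by the offset from the picked source point to the picked target point. A hedged sketch in plain Python (function names are hypothetical, not Blender's API):

```python
def snap_translation(source_point, target_point):
    """Offset that moves the whole selection so that the picked
    source point lands exactly on the picked target point."""
    return tuple(t - s for s, t in zip(source_point, target_point))

def apply_offset(points, offset):
    """Apply the snap offset to every vertex of the selection."""
    return [tuple(c + o for c, o in zip(p, offset)) for p in points]

# A cube corner picked at (1, 1, 0), destination corner at (4, 2, 0):
offset = snap_translation((1, 1, 0), (4, 2, 0))
print(offset)  # every vertex of the first cube moves by (3, 1, 0)
```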

Thanks for the feedback!

The last proposed change was the implementation of presets and filters per snap element (intended to help with retopology).

While I’ve seen some positive feedback, I’ve noticed that there are still a lot of questions as to whether it’s worth the complexity, or whether it’s better to save "presets" per workspace rather than expose a menu of options.

Due to these issues, (and as retopology is not currently a focus), this change was postponed.

Now the focus is on smaller changes like new defaults for Blender 4.0.

To discuss this specifically I created a new (temporary) thread:
Snapping & precision modeling improvements: New defaults, snap icons and removals

Feel free to comment there :slight_smile:

Continuing from the other thread, as I was requested to:

This feels like stonewalling. I believe the user would prefer the ability to snap something to the center of a face, rather than endure the minor effort of reading another option in the dropdown menu.

#78434 includes, as part of the proposal, adding the ability to snap metaballs. I don’t see how that feels more essential and basic.

Here’s a quick “result” image of why one might wish to snap an object to the center of a face:

That would be very many A key clicks, to select the entire edge of the cylinder…

I believe the attached image addresses this.

Note that I am not suggesting that the normals be displayed during the snap operation. I’m simply pointing out that their actual position and "up-vector" are where the user would expect the cube to snap to. The fact that the top of the cylinder is an n-gon (made of discrete triangles, fine; not relevant) doesn’t create a situation of a user wondering "why did it snap there?"
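One simple definition of the "center of a face" snap target being asked for is the centroid of the face's boundary vertices; under that assumption, the cylinder-cap example is trivial to compute, regardless of how the n-gon is triangulated internally. A small illustrative sketch (not how Blender necessarily defines it):

```python
def face_center(face_verts):
    """Centroid of an n-gon's boundary vertices: one reasonable
    'snap to face center' target. Internal triangulation of the
    n-gon does not change the boundary vertices, so it is irrelevant
    under this definition."""
    n = len(face_verts)
    return tuple(sum(v[i] for v in face_verts) / n for i in range(3))

# Top face of a 1x1 square column at height 2:
print(face_center([(0, 0, 2), (1, 0, 2), (1, 1, 2), (0, 1, 2)]))
# (0.5, 0.5, 2.0)
```

For a regular cylinder cap, this centroid coincides with the visual center the user expects, which is why "why did it snap there?" confusion seems unlikely.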

Additionally, I cannot determine how to snap one entire object to another object, using the origins as the snapping points. I accept that I may not be clicking the correct combo options in the menu. However, if this is indeed not currently possible with the new feature set, I trust that the use case is obvious.


Such an action depends on specific conditions that influence the specific method of realization, I guess.

I believe that snapping to the 3D cursor as to a vertex (being able to use it as snap source or snap target), including its orientation, could be useful for different operations, for example self-snapping.

Self-snapping is a separate iceberg though.
An example - putting a cube from its bottom to its top.
Ghosting was proposed (like in most CAD programs), but it could be performance-expensive when you work with heavyweight selections (like precisely placing houses that consist of up to 50k objects). Ghosting could be optional, but it is also challenging to implement, I guess.
At the moment, self-snapping can be solved by building additional geometry (temporary constructions, a.k.a. helpers), like drawing a mesh edge from the snap source to the snap target and removing it after the operation (or keeping it for multiple similar operations, which is typical when working with instances, for example). So it is a question of a QOL feature rather than critical missing functionality, and it can be postponed.

If you have to do workarounds like that, it might as well be missing functionality. Maybe not critical functionality, but functionality nonetheless. I definitely would not just call that a QOL issue.


Sure, but the point is that in some cases you have to build them anyway, even if it is solved another way (for example, moving a window in a wall by a desired distance and direction, where the opening in the wall is a unique mesh and the window object is an instance, so they cannot be edited with a single multi-edit operation).

I mean - this probably borders on a feature request, but I still think it’s important for precision modeling:
One feature I have not seen in any regular 3D modeling app except one is simply the functionality to input dimensions and position of a selection in edit mode (relative or in world space).
Sure - we can edit the precise scale, position, dimensions and rotation of an object based upon its pivot. But it is so damn useful to also have that same functionality for an arbitrary selection in Edit Mode. Rotation, of course, will not really work well, since there is no real base for it, but position and scale alone would do.

It eliminates many actions with rulers, temp geometry for scaling or placement, and scale calculations pretty much instantly. Granted, with the new point snap, positioning will be far less of a problem, but still, picture this:
You know how big a selected part of a model should be, or how big you need it. Simply select the geometry, and the Item transform tells you not only the median position but also the scale of the selection. It goes both ways, checking and adjusting: you can immediately see the dimensions of a selection and rescale them precisely via median input if needed. No rulers or scaled temp geo to snap to.
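The requested feature is straightforward to express: report the selection's bounding-box dimensions, and scale the vertices about the median to hit a target size. A sketch in plain Python over vertex tuples (names hypothetical, not Blender's data model):

```python
def selection_dimensions(verts):
    """Axis-aligned bounding-box dimensions of a vertex selection."""
    mins = [min(v[i] for v in verts) for i in range(3)]
    maxs = [max(v[i] for v in verts) for i in range(3)]
    return tuple(mx - mn for mn, mx in zip(mins, maxs))

def scale_to_dimension(verts, axis, target):
    """Rescale a selection about its median so its bounding-box size
    along `axis` becomes exactly `target` (the 'type a new dimension
    into the Item panel' idea)."""
    factor = target / selection_dimensions(verts)[axis]
    median = [sum(v[i] for v in verts) / len(verts) for i in range(3)]
    return [tuple(median[i] + (v[i] - median[i]) *
                  (factor if i == axis else 1.0)
                  for i in range(3))
            for v in verts]

edge = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(selection_dimensions(edge))        # (2.0, 0.0, 0.0)
print(scale_to_dimension(edge, 0, 4.0))  # grows to 4 units about the median
```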

If that goes off the rails too much please ignore this. Since this is also a thread for precision modeling I found it very fitting here. Especially since from my experience somehow this seems to be a feature that never crosses the minds of many people working with 3D modelers, even for many years. :astonished:


Ah nice. My proposal is two years old and got only 9 upvotes :sweat_smile:
And I just saw there is yet another one from 6 months ago with seven upvotes. We should funnel all individual upvotes into the most active proposal. Does RCS have a way to report duplicates and request merges? @pablovazquez


Blender can let you set edge length (with the Mesh Tools add-on).


But that is it. Yes, it would be amazing if I could just edit the dimensions of selections in Edit Mode.


Right? There’s already a transform panel, conveniently letting you type in new XYZ info… unless you’re in Edit Mode. It’s such an omission that it feels like an oversight, but I was honestly shocked at the pushback of "why would you ever want to know the distance of a selection, or scale a vertex selection to a specific distance?! Use the ruler!"

Like, really?

(It’s awesome that blender has a ruler tool. Number of times I ever used a ruler tool in any other 3D software: I think probably never. Because I didn’t have to use a ruler tool.)


Yeah, I think that is just Blender culture. I was kind of the same until I met someone who wanted to migrate from C4D to Blender and showed me how cool it is.


Besides C4D and Blender, I used two other major programs for at least one year each (most for 3+ years at some point) as my main programs in daily use. I asked the question on other programs’ forums as well, yet to me it simply seems that this …
a) … never even crossed the minds of most people working with the program
b) … never crossed the minds of the devs either, and they can’t see it being useful, inexplicably.

Mostly only people coming from C4D ever immediately agreed with me. XD

The only way I can explain it to myself is that you get so used to your daily workflow in (Program X) that you just accept every quirk and workaround as at least okay, because you know your way around it. I can safely say from asking for this on more than one forum (where the devs also read) that it’s not just a Blender thing. It seems like it simply is one of those inexplicably overlooked, simple yet crazy powerful features nobody notices for some reason. :man_shrugging:

What’s weird to me is that, unlike other feature requests, this seems rather unintrusive: it doesn’t change the workflow of anyone who doesn’t need it. And it also seems like rather low-hanging fruit in terms of complexity.

It really only has to have two settings:
Global scale (the scale in raw scene values) and Relative scale (the scale relative to the object’s main scale)

Bonus points for adjusting and showing the bounding-box rotation of the current selection based upon the gizmo setting for local or global transform. With that, it would even surpass C4D and be a Blender thing. The most important thing for practical usage: the transform values have to update and be editable in real time, like any other transform Vector3 field.
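The two proposed display modes differ only by a per-axis division; under the assumption that "relative" means "global dimensions divided by the object's own scale", a minimal sketch:

```python
def selection_scales(global_dims, object_scale):
    """The two proposed display modes for a selection's size:
    'global' is the raw scene-unit size, 'relative' divides by the
    object's own scale (assumption: so 4.0 on X means 'four of the
    object's local X units')."""
    relative = tuple(g / s for g, s in zip(global_dims, object_scale))
    return {"global": tuple(global_dims), "relative": relative}

# A selection 2 scene units wide inside an object scaled 0.5 on X:
print(selection_scales((2.0, 1.0, 1.0), (0.5, 1.0, 1.0)))
```

Since both values derive from the same bounding box, showing them side by side (and making both editable) would not add meaningful complexity.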


The term “user” means “a person who is used to something”. A piano user is a person who is used to playing the piano; a guitar user is someone who is used to the guitar.

If some functionality is absent from a piece of software, its users’ skills naturally grow to work around that.
For example, Blender never had base-point snapping, and entire movies were made without it.

Any software that exists is a collection of solutions and decisions made by certain people at a certain time; some of them are successful, some have critical flaws. For example, industry standards are something like Windows XP for hackers: a very well-known platform that is easy to hack and outperform. This is why workflow designers, who analyse software across a wide range of workflows, exist.

At the moment there are no wide-range workflow designers consulting the developers on the Blender crew, and the devs are consulted mostly by specialists who perform a limited set of workflows, like animators.

@mano-wii, is it possible to add numeric input in the direction of a base-point snap?
For example, you start a grab, set the base-point source, hover over the target, then type “3 Enter”, and it translates the selection 3 meters in the direction defined by the source-target vector (in case no other axis restriction was added during the operation)?

The same thing could work for rotation, for example: a source-target vector could represent a custom rotation axis.
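For the translation case, the proposed "3 Enter" input is just the typed distance applied along the normalized source-target direction. A sketch of that math (illustrative only, not Blender's transform code):

```python
import math

def offset_along(source, target, distance):
    """Translate by `distance` along the direction defined by the
    source->target base-point pair, as in the '3 Enter' proposal
    above. Raises if the two points coincide (no direction)."""
    d = [t - s for s, t in zip(source, target)]
    length = math.sqrt(sum(c * c for c in d))
    if length == 0.0:
        raise ValueError("source and target coincide; no direction defined")
    return tuple(c / length * distance for c in d)

# Source (0,0,0), target (0,4,0), typed distance 3: move by (0, 3, 0).
print(offset_along((0, 0, 0), (0, 4, 0), 3.0))
```

For the rotation case, the same source-target vector would serve as the axis, with the typed number interpreted as the angle instead of a distance.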


I feel like this is a task someone other than a core dev could very well try to tackle, if they knew about it.