Modeling tools = can has moar luv?

Hi All,

Like many, I fell in love with 2.8, and I even stopped my Max maintenance for it.
For now I am still using Max for modeling, as the leap is still a bit far.
I am wondering if modeling tools will get more attention in future versions of Blender?

What I really miss so far is the ability to change the parameters of a primitive after creating it.
Now I lose this possibility right after making it, because I never know how many segments I want until a bit later, or sometimes even at the end of a project (where in Max I can still go down the stack and make changes, as long as I didn’t mess with the vertex count higher up).

I’m not saying I can’t work without a modeling stack, but it would be nice to at least have the parameters available until later; that would save me some frustration.

Next to that, it would be really nice to see the modeling toolkit evolve into the best one there is, so I don’t have to go into another app just to make a model.
I’ve seen some awesome add-ons being made for 2.80, so that is promising as well.

Any heads-up on modeling dev plans?


Parametric objects are too obvious a missing feature not to be added at some point. Node-based modifiers should allow for this.
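To make the idea concrete, here is a minimal plain-Python sketch of what "parametric after creation" could mean (the `ParametricCylinder` class and its parameter names are invented for this illustration, not any Blender API): the primitive keeps its creation parameters and rebuilds its mesh whenever one of them changes.

```python
import math

class ParametricCylinder:
    """Toy parametric primitive: it keeps its creation parameters
    and can rebuild its vertex list at any time."""

    def __init__(self, radius=1.0, depth=2.0, segments=32):
        self.radius = radius
        self.depth = depth
        self.segments = segments
        self.vertices = []
        self.regenerate()

    def regenerate(self):
        """Rebuild the mesh from the current parameters."""
        self.vertices = []
        for ring_z in (-self.depth / 2, self.depth / 2):
            for i in range(self.segments):
                a = 2 * math.pi * i / self.segments
                self.vertices.append(
                    (self.radius * math.cos(a), self.radius * math.sin(a), ring_z)
                )

    def set_segments(self, segments):
        """The key point: changing a parameter later is cheap,
        because the mesh is regenerated rather than baked."""
        self.segments = segments
        self.regenerate()

cyl = ParametricCylinder(segments=8)
print(len(cyl.vertices))   # 16: two rings of 8 vertices
cyl.set_segments(32)       # change the parameter long after creation
print(len(cyl.vertices))   # 64
```

In today's Blender the segment count is only editable in the redo panel immediately after running the operator; a design like the sketch above is what would let it stay editable for the lifetime of the object.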


The only modeling tool from Max that Blender may be missing, as far as I remember, is the constrain-to-face and constrain-to-“edge” option. Blender has an edge slide that in some situations is better than Max’s constraint, but it’s not the same, and sometimes Max’s behaviour is more useful. Besides that (and the Edit Poly modifier, which I had a meh relationship with), what other important modeling tool is missing in Blender?

Now that I think about it, there are some things with the selections that are really nice in Max, and a few more minor things, but I can’t remember anything that justifies going back to Max to model. Even more, I jumped from Max to Blender precisely because I modeled faster and smoother in Blender. This was around 2009 to 2012; maybe after that Max added something extremely good that I don’t know about.

Indeed, I see the awesome potential of the modeling tools in Blender. I am just wondering if anything is on the agenda, as I see a lot of improvements in all the other fields.

This is kind of sidetracking the discussion at hand, but since a nodal workflow is apparently set to become a very integral part of Blender in the future, has some thought gone into better dual-/multi-monitor support already?
I mean, it is possible, and I do use a second window on my second monitor at work, but refocusing with a double-click, for example, is always a weird annoyance.

With a node editor becoming a more prominent piece of the UI, it would make sense to ease its use by having the setup open on a second screen. After all, nodes can demand a lot of screen real estate fast. So does the 3D view.


Refocusing should not happen with double-click. This is not how Blender works.

There are some OSs that let you easily enable ‘sloppy focus’, which automatically focuses the window to where the mouse is.

Nodes: Making things node-based does not necessarily mean using the node editor. For editing primitives, you could still do this inside the Properties, even if it’s evaluated through a node tree.

For more, see


Huh, interesting. This is weird, though. I just tried it, and indeed Blender does recognise a mouse hover even without the window being in focus first. It cannot actively shift the view in that window without a second click, though.

I just tried it in Maya and in Modo to see if I was just too used to the other ones, but indeed they do behave differently in two ways:

  1. When Blender has a second window open, you need to refocus both windows separately - i.e. bring them to the front separate from each other. Both are shown as separate icons in the taskbar as well. Maya and Modo show just one icon in the taskbar, and with that bring both windows back to the front when the program is focused.
  2. In Blender I can immediately drag a selection rectangle in a second window, but I cannot scroll before actively bringing the window into focus first. Meaning: only after the first released mouse click will the window be in focus to make scroll, pan and zoom available inside the viewports.
    Maya as well as Modo let me rotate/pan immediately, no matter if the main or secondary window was used beforehand.

Sloppy focus is an interesting thing I didn’t know about, but that behaviour doesn’t seem to be desired across all applications in the OS.

Does this have something to do with Blender’s UI engine, or is it designed behaviour?

(edit) I just noticed that Modo lets me hover over a viewport and highlight faces therein even when another program like Substance Painter is open and in focus. So I assume this might have to do with Blender’s multi-platform UI…?


There is actually a Google Summer of Code project about the Bevel modifier.
During one discussion about this tool, a developer brought up the idea of edge groups.

Discussions about retopology and sculpting are very active. Those fields can be considered part of modeling, and tools made for them can complement other modeling tools.

You can find improvements to the snapping tools, auto-merging, and the Poly Build tool described on the Tools & Gizmos page of the 2.81 release notes.

The 2.8 betas brought a lot of feedback that was folded into the original 2.8 design ideas.
Developers have a long list of to-dos to improve the current active tools.

I have been following Blender development for more than a decade, and I don’t remember a Blender release that did not bring an improvement to the modeling tools.

Thanks folks, I had no idea there are plans for nodal modeling; that sounds very promising.


The current stack could too, if adding an object simply spawned a primitive generator: basically an empty mesh object with a modifier that offers a drop-down menu for selecting a primitive type, followed by the parameters for the selected primitive’s generation.

Edit: better yet, Empty objects could be the base object type. As long as no data-block is connected, it would just act as an Empty does right now, simply offering a viewport display type, and at this point the only modifier that could be added to it would be the generator.

That would offer a top-level drop-down: a data-type drop-down (mesh, curve, etc.), after which it could offer a primitive choice for data types where applicable (mesh [cube, cylinder, …], curve [bezier, path, …], lamp [point, spot, …], etc.), followed by the appropriate parameters.

For empties with no modifiers or data-blocks attached, the data-block selection field could give access to all object data-blocks: the object would become an instance of a mesh object if a mesh data-block was selected, a curve object if a curve data-block was selected, etc.

For mesh generators, it would then be useful to have one additional primitive type: a single vertex with nothing but a location (translation) parameter adjusting its position in the object’s local space.

Sometimes all you need for parametric modeling is that single vertex to start from. I have modeled plenty of barrels in Blender that way, with nothing but modifiers, after merging all the vertices of a cube and moving the point a bit to define the radius of my barrel. :stuck_out_tongue:
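A rough sketch of the proposed generator modifier, in plain Python (all names here - `GeneratorModifier`, the `PRIMITIVES` table, the `single_vertex` type - are invented for illustration; this is not the Blender modifier API). The drop-down from the proposal becomes a type key, and the single-vertex primitive is just one more entry in the table:

```python
# Toy model of the proposed "primitive generator" modifier.

PRIMITIVES = {
    # primitive type -> function building vertices from a parameter dict
    "single_vertex": lambda p: [tuple(p.get("location", (0.0, 0.0, 0.0)))],
    "cube": lambda p: [
        (x * p.get("size", 1.0), y * p.get("size", 1.0), z * p.get("size", 1.0))
        for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)
    ],
}

class GeneratorModifier:
    """First (and only possible) modifier on an otherwise empty object:
    it *creates* geometry instead of filtering existing geometry."""

    def __init__(self, primitive_type, **params):
        if primitive_type not in PRIMITIVES:
            raise ValueError(f"unknown primitive: {primitive_type}")
        self.primitive_type = primitive_type
        self.params = params

    def evaluate(self):
        """Produce the vertex list from the current parameters."""
        return PRIMITIVES[self.primitive_type](self.params)

# The "barrel" starting point: a single vertex whose only parameter
# is its location in the object's local space.
gen = GeneratorModifier("single_vertex", location=(0.5, 0.0, 0.0))
print(gen.evaluate())   # [(0.5, 0.0, 0.0)]
```

Because the generator sits at the bottom of the stack, everything above it (a Screw or Array modifier, say) would stay live while the primitive’s own parameters remain editable.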

Uggh, I wrote too long of a post. Should just go to sleep already.

You stated that you need to double-click to focus. This is not the case - a single click is enough.

Sure. I did correct myself in the second post, didn’t I?

My point still is: Blender needs an extra click to refocus the window. That sounds like not much, but in practice it can get really annoying. You click over in the second window and drag, expecting it to rotate the second viewport. But it doesn’t: depending on which button combination you pressed when refocusing, you might get no reaction at all, you might drag a selection box, or you might move an object.

Which is why I asked: I don’t know if the scope of a potential fix is a papercut or if this runs deeper.

I think this discussion should maybe continue in the Papercuts feedback thread. Maybe I am doing something wrong, or maybe others have similar experiences. It’s derailing the topic at hand, though. So, sorry for that.


It could have been added to the stack too, but it never was. And since you would not be able to drive those options with other nodes, they could just as well be options set in the mesh data-block.

Nodes change this, because then it becomes useful to be able to drive the parameters with other nodes.
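A toy sketch of why node inputs matter, in plain Python (the node classes and names here are invented for illustration, not Blender's node API): once a primitive's parameter is an input socket, any upstream node can compute its value, and re-evaluating the graph updates the primitive.

```python
# Minimal node-graph sketch: a primitive's parameter is a socket
# fed by other nodes, rather than a fixed number.

class ValueNode:
    """Leaf node holding a plain value."""
    def __init__(self, value):
        self.value = value

    def evaluate(self):
        return self.value

class MathNode:
    """Combines two upstream nodes; defaults to multiplication."""
    def __init__(self, a, b, op=lambda x, y: x * y):
        self.a, self.b, self.op = a, b, op

    def evaluate(self):
        return self.op(self.a.evaluate(), self.b.evaluate())

class CylinderNode:
    """Primitive generator whose 'segments' parameter is a node input."""
    def __init__(self, segments_input):
        self.segments_input = segments_input

    def evaluate(self):
        segments = int(self.segments_input.evaluate())
        return {"type": "cylinder", "segments": segments}

base = ValueNode(8)
doubled = MathNode(base, ValueNode(2))   # segments = 8 * 2
cyl = CylinderNode(doubled)
print(cyl.evaluate())   # {'type': 'cylinder', 'segments': 16}

# Change the upstream value; the primitive follows on re-evaluation.
base.value = 16
print(cyl.evaluate())   # {'type': 'cylinder', 'segments': 32}
```

With options baked into the mesh data-block, only a human (or a driver) can change them; with sockets like these, the rest of the graph can.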


Yes, of course. And drivers can drive them, too. A node is a node, a graph is a graph; the two aren’t necessarily the same thing.

If it were up to me, I think we should have added this many years ago - parametric objects are such a basic missing thing. But now that node-based modifiers are coming, it probably makes more sense to tackle it then.


Yeah, probably.

With the plan to move OpenSubdiv from the modifier stack to the mesh datablock, it’d make sense to add subdivision and crease control to the Create Primitive operators. If you threw in a ‘Multiply Creases’ operator that works in Object mode and an “apply scale after transforming” option for the Scale Cage tool, you’d have a basic system for parametric primitives.
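As a rough sketch of that idea in plain Python (everything here is invented for illustration - neither `CreasedPrimitive` nor a `multiply_creases` operator exists in Blender today): a primitive would carry a subdivision level and per-edge crease values as parameters, and an Object-mode operator could rescale all creases at once.

```python
# Hypothetical primitive parameters extended with subdivision and
# creasing, plus an Object-mode "multiply creases" operation.

class CreasedPrimitive:
    def __init__(self, subdivision_level=2, creases=None):
        self.subdivision_level = subdivision_level
        # crease weight per edge index, each in [0, 1]
        self.creases = dict(creases or {})

    def multiply_creases(self, factor):
        """Scale every crease weight, clamping the result to [0, 1]."""
        self.creases = {
            edge: min(1.0, max(0.0, value * factor))
            for edge, value in self.creases.items()
        }

cube = CreasedPrimitive(subdivision_level=3, creases={0: 0.4, 1: 0.8})
cube.multiply_creases(2.0)
print(cube.creases)   # {0: 0.8, 1: 1.0}
```

The point of having such an operator work in Object mode is that it keeps the primitive parametric: you adjust the crease weights without ever entering Edit mode and destroying the generated state.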
