Removing confusing "scale" mechanics from Blender

Based on the amount of confusion the mesh “scale” mechanics have caused me personally and thousands of other people, I can safely say it is the worst thing to happen to Blender. So why not get rid of it?

Let’s say we have a cube 1m x 1m x 1m.

When you increase the width of that cube to 2m, the only thing that should happen is that the cube now has the following properties: 2m x 1m x 1m, which makes perfect sense, right? But instead of this, Blender creates some weird abstraction called “scale” that is monkey-wrenched onto the mesh data in some magical way that no one understands (judging by the amount of confusion it has raised on the internet).

I wanted to use Blender for precise engineering, but I gave up after spending several hours figuring out why some of the meshes in my project had the wrong properties and others didn’t.

I mean, explain this nonsense:

This image demonstrates why I think this “feature” should be removed from Blender. I added a ruler in Object Mode and then another one in Edit Mode, and I don’t know which one to believe. This block should be about 0.015m in height, but after manipulating different properties of the object it now shows different confusing numbers everywhere.

I don’t know why we have to do these magic manipulations, but I applied the scale for every object after manipulating it, and it helped in some cases but did nothing in others. And how am I supposed to trace it back to the beginning, figure out how this abstraction works, and work out what I need to do to fix a problem that shouldn’t exist in the first place?

When I change the size of a mesh I should expect Blender to just change the size of that mesh. So why is Blender creating a magical unicorn instead? And why do we all have to spend hours learning how to use that magical unicorn?

If I want to “scale” a 1m x 1m x 1m cube and make it twice as big, I should change its width, length, and height to 2m x 2m x 2m, and after that all the modifiers, textures, UVs, measuring tools(!), etc. should simply update to reflect that change, right? If I change a mesh’s length property, Blender should change that mesh’s length. It cannot be any easier.
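
To make the complaint concrete, here is a tiny plain-Python sketch (hypothetical names, not the Blender API) of the relationship at the heart of the confusion: the dimensions you see are the raw mesh extent multiplied by a hidden object-level scale factor.

```python
# Hypothetical sketch (not the Blender API) of how object "dimensions"
# relate to the mesh's own extent and the object-level scale factor.

def mesh_extent(verts):
    """Axis-aligned size of the raw mesh data (model space)."""
    return tuple(
        max(v[i] for v in verts) - min(v[i] for v in verts)
        for i in range(3)
    )

def dimensions(verts, scale):
    """What the Dimensions field shows: mesh extent times object scale."""
    ext = mesh_extent(verts)
    return tuple(ext[i] * scale[i] for i in range(3))

# A 1m cube stored as mesh data, scaled 2x on X in object mode:
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(dimensions(cube, (2.0, 1.0, 1.0)))  # (2.0, 1.0, 1.0)
```

The point of friction is that the vertex data still describes a 1m cube; only the multiplier changed.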

I love Blender, but this feature makes me angry both as a developer and as a Blender user.

Pros of removing “scale” mechanics from Blender:

  • Objects will have precise properties in any context, without any invisible magic layers on top of them
  • It will simplify the codebase and increase the rate of development
  • It will remove an unnecessary confusing layer of abstraction and decrease the amount of bugs
  • It will save developers and digital artists millions of hours figuring out how this layer of abstraction works
  • Blender will become usable for precise engineering

You know you can apply scale right after changing dimensions, right?

As a side note, I disagree entirely here. Blender will never be a viable option for ‘precise engineering’ because at the end of the day it’s still using floating-point math and will never have the dimensional precision necessary for engineering purposes. It’s just the wrong tool for the job in this situation.


If it was that simple, there wouldn’t be thousands of questions on the internet related to “scale” directly or indirectly.

This “scale” abstraction should not exist. If I scale a mesh, Blender should just update the properties of that mesh directly, that’s it. If it’s doing something else, the algorithm is wrong. Object properties should always be reflected in a precise, direct way.

  • Why should I have to remember to do this thousands of times after every single manipulation?
  • What happens if I forget to apply that manipulation right after and then do other manipulations, will that break everything?
  • If it breaks everything how can I fix it if I cannot undo to that particular step?
  • What if applying changes didn’t help and it still shows me what I highlighted on the image in the post?
  • How should I trace back what I did wrong and fix it? And why should I have to do it in the first place?
  • Do I need to do that when I add multiple meshes together?
  • Do I have to do that just for the scale or for location as well?
  • When should I apply only the location and not other properties?
  • When should I apply only the scale and not other properties?
  • Do I have to do that in Edit or Object mode?
  • What if I forget to do that and change the object in Edit mode and cannot undo to that step, can I apply the changes after, or not?
  • Do I have to do that for every manipulation or are there some particular situations I shouldn’t?
  • When should I not apply changes at all, what are the exceptions?

I don’t think the problem is as large as you’re making it out to be. This concept is not unique to Blender.

This fundamental concept also exists in other DCCs like 3ds Max and Maya. Over there, applying scale/rotate/translate is called “Reset XForm” in Max and “Freeze Transforms” in Maya. Their “edit mode” equivalent is called either “sub-object mode” in the case of Max’s EditPoly modifier or Component Mode in Maya. They behave similarly.

Object scale affects the object’s dimensions and those are simply different than the dimensions of the actual mesh data.

One concrete example: Model a table. ALT-D duplicate that table so that it shares mesh data with the first; i.e. it has the exact same shape. Now scale, in object mode, that second table to make it larger. This MUST be possible to do without also scaling that first table. And it is possible because of the separation of scales.
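
In plain Python terms, the separation described above looks roughly like this (assumed class names, not actual Blender code): two objects reference one shared mesh, and each carries its own scale.

```python
# Minimal sketch (assumed names, not Blender code) of why a per-object scale
# lets linked duplicates share one mesh yet differ in size.

class Mesh:
    def __init__(self, verts):
        self.verts = verts  # shared vertex data, model space

class Object:
    def __init__(self, mesh, scale=(1.0, 1.0, 1.0)):
        self.mesh = mesh        # reference, not a copy
        self.scale = list(scale)

    def world_verts(self):
        """Vertex positions with the object's own scale applied."""
        return [tuple(c * s for c, s in zip(v, self.scale))
                for v in self.mesh.verts]

table_mesh = Mesh([(0, 0, 0), (1, 1, 1)])  # stand-in for a modeled table
table_1 = Object(table_mesh)
table_2 = Object(table_mesh)               # ALT-D style linked duplicate

table_2.scale = [2.0, 2.0, 2.0]            # scale only the second table
print(table_1.world_verts()[1])  # (1.0, 1.0, 1.0) -- unchanged
print(table_2.world_verts()[1])  # (2.0, 2.0, 2.0) -- larger, same mesh
```

If the scale lived in the vertex data instead, scaling `table_2` would necessarily resize `table_1` as well.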

What do you mean? If you want the forked object to reflect all the changes of the original, duplicate it with ALT+D, but if you want it to be an independent object duplicate it with SHIFT+D. What’s the problem?

If you want Blender to share memory for these 2 objects but still be able to change some of their properties, there should be a “linked fork” button or something, which would tell Blender to create a linked object and create override variables for the changed properties:

  • When you need it, you would click the “linked fork” button, which duplicates object table_1 into table_1_fork_1
  • Change table_1_fork_1’s dimensions from [2,1,1] to [3,1,1]
  • Blender would create an override variable for dimensions.x without allocating memory for table_1_fork_1. That’s it.
  • That forked object will have dimensions [3,1,1], without any “scale” magic.
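
A rough sketch of how such a hypothetical override mechanism could work (none of these names exist in Blender; this is just the idea from the bullets above, expressed as code):

```python
# Sketch of the proposed "linked fork" override (a hypothetical design,
# not an existing Blender feature): the fork stores only the properties
# that differ from the original object.

class LinkedFork:
    def __init__(self, original):
        self.original = original  # dict of the original's properties
        self.overrides = {}       # only changed values live here

    def __getitem__(self, key):
        # Fall back to the original for anything not overridden.
        return self.overrides.get(key, self.original[key])

    def __setitem__(self, key, value):
        self.overrides[key] = value

table_1 = {"dimensions": (2, 1, 1)}
fork = LinkedFork(table_1)            # "linked fork" of table_1
fork["dimensions"] = (3, 1, 1)        # override, no full copy of the mesh
print(fork["dimensions"])             # (3, 1, 1)
print(table_1["dimensions"])          # (2, 1, 1) -- original untouched
```

Note that this is essentially what an object-level transform already provides, just with different bookkeeping.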

I would love to know what kind of bizarre workflow you’re using that requires thousands of mesh-level scale operations from object mode. And where are these thousands of other people who are having trouble grasping how scale works? Literally every other DCC package works this way; Blender is not doing something crazy here. The most common scale-related problem I’ve seen in Blender is when people try to bevel edges on a mesh without uniform scale. If you want to talk about where Blender could improve the workflow for new users, let’s start there, but remove scale altogether? That’s just crazy talk. Why not remove translation and rotation while we’re stripping away basic 3D principles :stuck_out_tongue:

A lot of this confusion comes from not understanding Blender’s Object->ObjData structure.

If you are using Blender to do precision engineering, you have the wrong tool.

tl;dr: change your objects in Edit Mode.


I second this, and I’d like us to remove rotations as well, they’re too confusing

Hi Alex. What you are asking for, and all these questions, show that you are not yet familiar with the basics of 3D math or 3D tools in general. Your proposal to remove scale is not good for quite a few reasons, but explaining all of that would lead too far here. I don’t think you will get answers here to all your questions, and you should really look for a beginner’s book of your liking on a specific tool, or on the underlying math, to ease your start with whatever 3D tool you choose. The topic you are touching on here is, as the others stated, really basic and essential to all 3D programs. But to give you a start toward some understanding: each mesh has an origin, and vertices are defined relative to that origin. The origin is normally at the center of the mesh’s own “universe”. That’s model space, and it makes things easier to calculate later. Between these vertices we define edges or faces, and that’s what we call a mesh.

The other thing is matrices. Simply put, with matrices we can move, rotate, and scale these model-space vertices in 3D space. A matrix can be seen as a group of location offset, rotation, and scale, but it has other nifty features too. Matrices are an important concept for making the calculations that modify meshes in 3D space fast, consistent, and efficient to combine; that’s needed if you are, e.g., working with hierarchies and reusing meshes in multiple locations. It’s needed for all sorts of transformations, like the transform into camera space and quite a few other spaces too. You don’t see it, but sticking to these principles is not just for editing your cube’s size.

And just picking a single feature like scale and dropping it does not lead to functioning math for the rest. There are alternatives to matrices, but they are not easier to grasp.
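
For the curious, the matrix composition described above can be sketched in a few lines of plain Python (rotation left out as the identity to keep it short):

```python
# Sketch of the standard transform composition: a model-space vertex is
# mapped to world space by M = T * R * S (rotation omitted as identity).

def mat_vec(m, v):
    """Multiply a 4x4 matrix (nested lists) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scaling(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

# Scale 2x on X, then move 5 units along X:
M = mat_mul(translation(5, 0, 0), scaling(2, 1, 1))

v = [1, 1, 1, 1]          # model-space vertex, homogeneous coordinates
print(mat_vec(M, v))      # [7, 1, 1, 1] -- scaled, then translated
```

The scale factor the original post wants to remove is exactly the `S` component of this matrix; dropping it leaves the rest of the pipeline with nowhere to express size.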

It’s one thing to want to see object dimensions in Blender’s edit mode as a sort of comfort feature, which would solve your hassle; asking for the removal of the scale factor of a matrix is quite another, and it really makes no sense.

My coffee is empty. I hope you got enough info for yourself to get the ball rolling. Good luck.


Removing scale from Blender is not possible, we’d be unable to properly interoperate with other applications and file formats.

However, what is possible is making it so that scaling operations in object mode apply to vertex positions by default (or with a preference). This is done in some other applications, and makes some sense in my opinion, though it does come with its own set of issues.
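
What “applying scaling operations to vertex positions” amounts to can be sketched in plain Python (not bpy): bake the scale into the vertices and reset it to 1, so the world-space size stays the same while the scale factor disappears.

```python
# Sketch (plain Python, not the Blender API) of what applying scale does:
# bake the object scale into the vertex positions and reset the scale
# to 1, leaving the world-space dimensions unchanged.

def apply_scale(verts, scale):
    """Return scaled vertices and a reset scale factor."""
    baked = [tuple(c * s for c, s in zip(v, scale)) for v in verts]
    return baked, (1.0, 1.0, 1.0)

verts = [(0, 0, 0), (1, 2, 3)]
verts, scale = apply_scale(verts, (2.0, 2.0, 2.0))
print(verts[1], scale)   # (2.0, 4.0, 6.0) (1.0, 1.0, 1.0)
```

Doing this automatically after every object-mode scale is what the "by default" behavior above would mean in practice.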

It corresponds a bit more with how you think of objects in the real world. Moving or rotating something is not what you would consider modifying the shape of an object, while scaling would be considered deforming the object itself.


@brecht: Thinking about numerical stability, I would not suggest applying the scale to the vertices directly. But it might be nice as an option, yes. What would make the most sense to me here is to add an extended numerical view/panel in edit mode for working on all vertices or a selected part. One that allows you to modify vertex positions, rotation, and scale, and also to modify the dimensions relative to an axis-aligned bounding box (optionally shown in world space or model space).

A user would just have to switch to edit mode to leave the scale untouched.

I agree supporting dimension editing in edit mode makes sense.

I think experience shows that just telling users to switch to edit mode is not working that well. They can do that now, but regardless of the reason why, it’s not happening in many cases.


Yes, that’s something I can believe, and as I said, if it’s just an option I could also live with an option for object mode. But it introduces a lot of questions and problems. How do you proceed with instances? Do you propagate the change blindly? Do you correct their scale in the instance matrix to eliminate changes? What if you change it often that way and, e.g., coplanar regions have their normals deviate from each other? I think writing it back to the vertices is better for parametric objects, which Blender doesn’t support.

These drawbacks also apply to doing the same in edit mode, but they are rather expected there. A visual editor for numerical values is, in my view, something people are able to understand, and it would be a comfortable and handy tool in edit mode. But I might be wrong about how well this would be understood.

I’m not sure if I’m making a totally unrelated point, or making or breaking yours, but as a total noob I found myself in this situation yesterday: a cube, and a cup rim with a Solidify modifier, which were both telling me they were 3cm wide/thick. It looks like the issue was that I just started scaling all pell-mell after adding my cylinder/cup, but honestly I feel like that’s something I should be able to do, because scale in a scene should be universal, especially in a scene where you’ve already established scale with other objects (the donut). It’s odd to me that you could have a scene with a bottle of ketchup and the Empire State Building that Blender thinks are both real-world scale, yet they look the same height in the viewport.

Feel free to say “beat it noob, this is the dev forum!” btw…I can hang.

Once you apply the scale using Ctrl+A in Object Mode, you hardwire the scale into the object. This means that the values in modifiers are only true when the Scale values in the N-panel are 1.0.
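
A minimal illustration of why (a hypothetical one-axis simplification, not Blender’s actual modifier code): a modifier value such as a Solidify thickness is applied in mesh space and then stretched by the object scale.

```python
# Why modifier values only read true at scale 1.0: a Solidify-style
# thickness is applied in mesh space, then stretched by the object scale.
# (Hypothetical simplification: effective thickness along one axis.)

def world_thickness(modifier_thickness, scale_axis):
    return modifier_thickness * scale_axis

print(world_thickness(0.03, 1.0))  # 0.03 -- matches the panel value
print(world_thickness(0.03, 2.0))  # 0.06 -- twice what the modifier says
```

Applying the scale sets `scale_axis` back to 1.0, which is why the numbers agree again afterwards.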

I can recommend this free add-on, which auto-adjusts modifier values along with the applied scale of objects (something that would be very welcome as a native Blender function):

What I really see in forums related to this issue is that users realize something is wrong when they find a malfunction with hair, simulations, or modifiers, not that they are instantly confused by the transform values in the N panel (sometimes they don’t even look at the Transform values).
I remember that when Luca Rood was developing the Surface Deform modifier, users pointed out that they had to apply scale for it to work correctly, and Luca modified the code so that it did not depend on transformations. We asked him if it was possible to modify the code of other modifiers (such as the Curve modifier) so that they did not take transformations into account, and he replied that it was probably possible. So maybe instead of changing the whole Blender system, only the code where not applying transformations can cause problems should be changed.


Alex is absolutely right. This makes no sense at all. When people ask me how tall I am, I rarely reply with “3.686 times the average European newborn’s height”. This is a UX problem, and a major one. Everybody assumes that a mesh has a certain width, height, and depth (or call it length x/y/z). And yes, of course you can just go to edit mode, start SCALING (same word) and now the numbers change accordingly, but god forbid that right next to an object’s location (in meters, feet, or whatever unit) and rotation (in degrees or radians) you’d have a proper size property in the same units as the location! Yes, dimensions… I’m sitting here on 2.92 and dimensions are nowhere to be found by default; they only appear after you have scaled the object in edit mode. Why this is even a problem is beyond me. You have an arbitrary size for each mesh, showing up nowhere after creation, and a scalar to visibly change size, which is later not reflected in the actual numbers. WTF?

Units? For scaling? It’s a multiplier; units don’t make sense in this context.

I’m not sure I follow you correctly. Do you propose removing the additional object-level scaling altogether in favor of only scaling the mesh’s vertices directly?
image
The Dimensions panel for the bounding box of a mesh has always been there. Scaling an object in object mode is an additional modifier on top that is independent of the relation of the vertices to each other. If that scalar didn’t exist, we wouldn’t be able to scale an object along a certain axis and retrieve its original state, or scale instanced objects independently of each other.
Also think about exchange with other software. Scales are often multiplied or divided by factors of 10 because different software uses different metrics.

Sorry for posting this as a separate reply, but it’s not so much a specific comment as a general reply to the thread on its own.
First of all: the scale factor is not a magical unicorn, and it’s not special to Blender. It’s there in Max, Maya, Cinema 4D, and Modo as well, all software I’ve worked with over the years.

I don’t want to make this post about how to use the object scale, though, but to propose something I have only seen in Cinema 4D so far and never understood why no other 3D software I know of has incorporated it into its workflow, because it’s so damn useful, intuitive, and fast to work with.

In Edit Mode, show not only the median location of the current selection but also its size!

Sorry for stressing it that much but I find this so damn important.

What I mean in more detail:
When I’m in edit mode and make a selection of edges, polys, or vertices, the Transform panel shows the median of my current selection. I know where it is located in space, either globally or (with the object scalar included) in relation to my object pivot. It’s this, and it’s super useful:
image

What we need in addition to this is actually a second panel which does the same thing for the scale of any current selection. This would probably even help with the problem @AlexHoffman described in the initial post. Something like this:
image

When the user is in edit mode and makes a selection this should immediately display the scale of the selection bounding box and be editable just like the median position. This has major benefits:

  1. You can actually see immediately if some vertices have been pushed off the grid when you want to do precision modeling. If you know that a selection is supposed to be 2m tall but is 2.03m for some reason, you can spot that at a glance simply by selecting the polygon(s) in question.
  2. You can measure heights super fast in edit mode, without any additional tools. Yes, it only works for bounding-box measurements, but for anything else there are tape measures, and the bounding-box size is needed surprisingly often anyway.
  3. You can remedy irregularities super quick without ever leaving edit mode.
    It’s super easy to scale a selection back to zero on one axis without the “S → [axis] → zero” shortcut succession, and then place it by its Median Position just as quickly.
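
The math behind the proposed panel is simple; here is a plain-Python sketch (hypothetical helper, not an existing Blender function) of the selection bounding-box size alongside the median the Transform panel already shows:

```python
# Sketch of the proposed panel's math: the axis-aligned bounding-box size
# of the current vertex selection, alongside its median position.

def selection_bounds(selected_verts):
    size = tuple(
        max(v[i] for v in selected_verts) - min(v[i] for v in selected_verts)
        for i in range(3)
    )
    median = tuple(
        sum(v[i] for v in selected_verts) / len(selected_verts)
        for i in range(3)
    )
    return size, median

# Four vertices of a face that should be 2m tall but drifted to 2.03m:
sel = [(0, 0, 0), (1, 0, 0), (0, 0, 2.03), (1, 0, 2.03)]
size, median = selection_bounds(sel)
print(size)   # (1, 0, 2.03) -- the Z drift is visible at a glance
```

Making `size` editable, so typing a new value rescales the selection, is the second half of the proposal.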

This might also eliminate some of the problems people have when losing track of scaling in object and edit mode (which is something a user should learn regardless, though): you can see and edit the scale of meshes which are part of a larger mesh and could otherwise not be measured as easily.

It’s also IMMENSELY useful for game artists creating tileable geometry, as it has to be created on the grid to snap perfectly.
