Removing confusing "scale" mechanics from Blender

Based on the amount of confusion the mesh “scale” mechanics have caused me personally, and thousands of other people, I can safely say it is the worst thing that ever happened to Blender. So why not get rid of it?

Let’s say we have a cube 1m x 1m x 1m.

When you increase the width of that cube to 2m, the only thing that should happen is that the cube now has the following properties: 2m x 1m x 1m. Makes perfect sense, right? But instead of this, Blender creates some weird abstraction called “scale” that is monkey-wrenched onto the mesh data in some magical way that no one understands (judging by the amount of confusion it has raised on the internet).
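To make the complaint concrete, here is a toy model of the behavior being described. This is plain Python with illustrative names, not Blender's actual internals: editing “Dimensions” in object mode adjusts a scale factor rather than the mesh itself.

```python
# Toy model of object-mode "Dimensions" editing. Names (Obj, mesh_extent,
# set_dimension) are made up for illustration; this is NOT Blender's code.

class Obj:
    def __init__(self, mesh_extent):
        self.mesh_extent = mesh_extent  # size of the raw mesh data, e.g. [1, 1, 1]
        self.scale = [1.0, 1.0, 1.0]    # object-level transform shown in the N-panel

    @property
    def dimensions(self):
        # Displayed dimensions = mesh extent * object scale
        return [e * s for e, s in zip(self.mesh_extent, self.scale)]

    def set_dimension(self, axis, value):
        # Object mode does not touch the vertices; it adjusts the scale factor
        self.scale[axis] = value / self.mesh_extent[axis]

cube = Obj([1.0, 1.0, 1.0])
cube.set_dimension(0, 2.0)
print(cube.dimensions)   # [2.0, 1.0, 1.0]
print(cube.mesh_extent)  # still [1.0, 1.0, 1.0] -- the mesh itself never changed
print(cube.scale)        # [2.0, 1.0, 1.0]
```

The cube reports 2m wide, but the mesh data underneath is still 1m; the difference lives entirely in the scale factor.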

I wanted to use Blender for precise engineering, but I gave up after spending several hours figuring out why some of the meshes in my project had the wrong properties and others didn’t.

I mean, explain this nonsense:

This image demonstrates why I think this “feature” should be removed from Blender. I added a ruler in the object mode and then another one in edit mode and I don’t know what to believe. This block should be about 0.015m in height but after manipulating different properties of the object it now shows different confusing numbers everywhere.

I don’t know why we have to do these magic manipulations, but I applied the scale for every object after manipulating it, and it helped in some cases but did nothing in others. And how am I supposed to trace it back to the beginning, figure out how this abstraction works, and work out what I need to do to fix a problem that shouldn’t exist in the first place?

When I change the size of a mesh I should expect Blender to just change the size of that mesh. So why is Blender creating a magical unicorn instead? And why do we all have to spend hours learning how to use that magical unicorn?

If I want to “scale” a 1m x 1m x 1m cube and make it twice as big, I should change its width, length, and height to 2m x 2m x 2m, and after that all the modifiers, textures, UVs, measuring tools(!), etc. should simply update to reflect that change, right? If I change a mesh’s length property, Blender should change that mesh’s length. It couldn’t be any easier.

I love Blender, but this feature makes me angry both as a developer and as a Blender user.

Pros of removing “scale” mechanics from Blender:
  • Objects will have precise properties in any context, without any invisible magic layers on top

  • It will simplify the codebase and increase the rate of development
  • It will remove an unnecessary confusing layer of abstraction and decrease the amount of bugs
  • It will save developers and digital artists millions of hours figuring out how this layer of abstraction works
  • Blender will become usable for precise engineering

You know you can apply scale right after changing dimensions, right?

As a side note, I disagree entirely here. Blender will never be a viable option for ‘precise engineering’ because at the end of the day it’s still using floating point math and will never have the dimensional precision necessary for engineering purposes. It’s just the wrong tool for the job in this situation.
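A one-line demonstration of that floating point limitation, in plain Python (which uses 64-bit doubles; Blender stores mesh coordinates in 32-bit floats, which are coarser still):

```python
# 0.1 is not exactly representable in binary floating point, so even
# trivial arithmetic accumulates rounding error.
total = sum([0.1] * 10)
print(total)         # 0.9999999999999999, not 1.0
print(total == 1.0)  # False
```

If summing ten tenths already misses 1.0, holding machining tolerances is out of reach; CAD kernels address this with exact or fixed-precision representations that mesh modelers do not use.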


If it was that simple, there wouldn’t be thousands of questions on the internet related to “scale” directly or indirectly.

This “scale” abstraction mechanics should not exist. If I scale a mesh, Blender should just update the properties of that mesh directly, that’s it. If it’s doing something else, the algorithm is wrong. Object properties should always be reflected in a precise direct way.

  • Why should I have to remember to do this thousands of times after every single manipulation?
  • What happens if I forget to apply that manipulation right after and then do other manipulations, will that break everything?
  • If it breaks everything how can I fix it if I cannot undo to that particular step?
  • What if applying changes didn’t help and it still shows me what I highlighted on the image in the post?
  • How should I trace back what I did wrong and fix it? And why should I have to do it in the first place?
  • Do I need to do that when I add multiple meshes together?
  • Do I have to do that just for the scale or for location as well?
  • When should I apply only the location and not other properties?
  • When should I apply only the scale and not other properties?
  • Do I have to do that in Edit or Object mode?
  • What if I forget to do that and change the object in Edit mode and cannot undo to that step, can I apply the changes after, or not?
  • Do I have to do that for every manipulation or are there some particular situations I shouldn’t?
  • When should I not apply changes at all, what are the exceptions?

I don’t think the problem is as large as you’re making it out to be. This concept is not unique to Blender.

This fundamental concept also exists in other DCCs like 3ds Max and Maya. Over there, applying scale/rotate/translate is called “Reset XForm” in Max and “Freeze Transforms” in Maya. Their “edit mode” equivalent is either called “sub-object mode” in the case of Max’s EditPoly modifier or Component mode in Maya. They behave similarly.

Object scale affects the object’s dimensions and those are simply different than the dimensions of the actual mesh data.

One concrete example: Model a table. ALT-D duplicate that table so that it shares mesh data with the first; i.e. it has the exact same shape. Now scale, in object mode, that second table to make it larger. This MUST be possible to do without also scaling that first table. And it is possible because of the separation of scales.
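The separation can be sketched in a few lines of plain Python (illustrative structure only, not Blender's real data types): two objects reference the same mesh data, but each carries its own scale.

```python
# Sketch of ALT-D linked duplicates: shared mesh data, independent
# object-level scale. Class names are illustrative, not Blender's API.

class Mesh:
    def __init__(self, verts):
        self.verts = verts

class Obj:
    def __init__(self, mesh, scale=1.0):
        self.mesh = mesh    # shared reference, not a copy
        self.scale = scale

    def world_verts(self):
        return [v * self.scale for v in self.mesh.verts]

table_mesh = Mesh([0.0, 1.0, 2.0])   # 1D "vertices" for brevity
table_1 = Obj(table_mesh)
table_2 = Obj(table_mesh)            # ALT-D style: same mesh object

table_2.scale = 2.0                  # scale only the duplicate
print(table_1.world_verts())  # [0.0, 1.0, 2.0] -- unaffected
print(table_2.world_verts())  # [0.0, 2.0, 4.0]
# If scaling wrote into the shared vertices instead, table_1 would change too.
```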

What do you mean? If you want the forked object to reflect all the changes of the original, duplicate it with ALT+D, but if you want it to be an independent object duplicate it with SHIFT+D. What’s the problem?

If you want Blender to share memory for these 2 objects but still be able to change some of its properties, there should be a “linked fork” button or something which would tell Blender to create a linked object and create override variables for changed properties:

  • When you need it, you would click the “linked fork” button, which duplicates object table_1 into table_1_fork_1
  • Change table_1_fork_1’s dimensions from [2,1,1] to [3,1,1]
  • Blender would create an override variable for dimensions.x without allocating memory for table_1_fork_1. That’s it.
  • That forked object will have dimensions [3,1,1], without any “scale” magic.
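A minimal sketch of this hypothetical “linked fork” design in plain Python. None of this exists in Blender; the class and property names are made up to illustrate the override-table idea:

```python
# Hypothetical "linked fork": shared source data plus a per-object
# override table. This is the proposal being described, NOT a Blender feature.

class LinkedFork:
    def __init__(self, source, defaults):
        self.source = source        # shared data, never copied
        self.defaults = defaults    # e.g. {"dimensions": [2, 1, 1]}
        self.overrides = {}         # only changed properties are stored

    def set(self, prop, value):
        self.overrides[prop] = value

    def get(self, prop):
        return self.overrides.get(prop, self.defaults[prop])

table_1 = {"dimensions": [2, 1, 1]}
fork = LinkedFork(source="table_1 mesh data", defaults=table_1)
fork.set("dimensions", [3, 1, 1])
print(fork.get("dimensions"))   # [3, 1, 1] -- override, no mesh copy
print(table_1["dimensions"])    # [2, 1, 1] -- original untouched
```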

I would love to know what kind of bizarre workflow you’re using that requires thousands of mesh-level scale operations from object mode. And where are these thousands of other people who are having trouble grasping how scale works? Literally every other DCC package works this way, Blender is not doing something crazy here. The most common scale-related problem I’ve seen as it applies to Blender is when people try to bevel edges on a mesh without unified scale. If you want to talk about where Blender could improve the workflow for new users let’s start there, but remove Scale altogether? That’s just crazy talk, why not remove translation and rotation while we’re stripping away basic 3d principles :stuck_out_tongue:

A lot of this confusion comes from not understanding Blender’s Object->ObjData structure.

If you are using Blender to do precision engineering, you have the wrong tool.

tl;dr: change your objects in Edit mode.


I second this, and I’d like that we remove rotations as well, they’re too confusing

Hi Alex. What you are asking for, and all these questions, show that you are not yet familiar with the basics of 3d math or 3d tools in general. And your proposal to remove scale is not good for quite a few reasons, but explaining all of that would lead too far here. I don’t think you will get answers here to all your questions. You should really search for a beginner’s book of your liking on a specific tool, or on the underlying math, to ease your start with whatever 3d tool you choose. The topic you are touching here is, as the others stated, really basic and essential to all 3d programs. But to give you a start towards some understanding: each mesh has an origin, and vertices are defined relative to that origin. The origin is normally at the center of the mesh’s own “universe”. That’s the model space. That makes things easier to calculate later. Between these vertices we define edges or faces, and that’s what we call a mesh.

The other thing is matrices. Simply put, with matrices we can move, rotate, and scale these model-space vertices in 3d space. A matrix can be seen as a group of location offset, rotation, and scale, but it has other nifty features too. Matrices are an important concept for making the calculations needed to modify meshes in 3d space fast, consistent, and efficient to combine; that’s needed if you are, for example, working with hierarchies and reusing meshes in multiple locations. It’s needed for all sorts of transformations, like the transform into camera space and quite a few other spaces too. You don’t see it, but sticking to these principles is not just for editing your cube’s size.
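A minimal illustration of that idea in plain Python, no libraries. A 3x3 matrix in 2D homogeneous coordinates keeps it short; the same principle extends to the 4x4 matrices used in 3d:

```python
# One matrix carries both the scale and the translation; composing them
# gives a single transform applied to every model-space vertex.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    x, y = v
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

translate = [[1, 0, 5], [0, 1, 0], [0, 0, 1]]   # move +5 on X
scale     = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]   # uniform 2x scale

# Model-space vertex at (1, 1): scale first, then translate.
world = matmul(translate, scale)
print(apply(world, (1.0, 1.0)))   # (7.0, 2.0)
```

The mesh vertex itself stays at (1, 1); only the matrix decides where it lands in the world. Dropping the scale component would leave a hole in exactly this machinery.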

And just picking a single feature like scale and dropping it does not lead to functioning math for the rest. There are alternatives to matrices, but they are not easier to grasp.

It’s one thing to want to see object dimensions in Blender’s edit mode as a sort of comfort feature, which would solve your hassle; asking for the removal of the scale factor of a matrix is quite another, and really makes no sense.

My coffee is empty. I hope you got enough info for yourself to get the ball rolling. Good luck.


Removing scale from Blender is not possible, we’d be unable to properly interoperate with other applications and file formats.

However, what is possible is making it so that scaling operations in object mode apply to vertex positions by default (or with a preference). This is done in some other applications, and makes some sense in my opinion, though it does come with its own set of issues.

It corresponds a bit more with how you think of objects in the real world. Moving or rotation something is not what you would consider modifying the shape of an object, while scaling would be considered deforming the object itself.
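That alternative behavior can be sketched in a few lines of plain Python (illustrative only): scaling would bake the factor into the vertex positions and leave the object scale at 1, so the world-space result is identical.

```python
# Sketch of "apply scale" / scaling vertices directly: write the scale
# factor into the mesh and reset the object scale. World positions are
# unchanged; only where the size "lives" moves.

def world_positions(verts, scale):
    return [v * scale for v in verts]

verts, scale = [0.0, 0.5, 1.0], 2.0
before = world_positions(verts, scale)

# Apply scale: bake the factor into the mesh, reset the object scale
verts = [v * scale for v in verts]
scale = 1.0
after = world_positions(verts, scale)

print(before)  # [0.0, 1.0, 2.0]
print(after)   # [0.0, 1.0, 2.0] -- same world result
```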


@brecht: Thinking about numerical stability, I would not suggest applying the scale to the vertices directly, but it might be nice as an option, yes. What would make the most sense to me here is to add an extended numerical view/panel in edit mode to work on all vertices or selected parts. One that allows modifying vertex positions, rotation, and scale, and also allows modifying the dimension values relative to an axis-aligned bounding box (optionally shown in world space or model space).

A user would just have to switch to edit mode to leave the scale untouched.

I agree supporting dimension editing in edit mode makes sense.

I think experience shows that just telling users to switch to edit mode is not working that well. They can do that now, but regardless of the reason why, it’s not happening in many cases.


Yes, that’s something I can believe, and as I said, if it’s just an option I could also live with it in object mode. But it introduces a lot of questions and problems. Like: how do you proceed with instances? Do you propagate the change blindly? Do you correct their scale in the instance matrix to eliminate changes? What if you change it often that way and, e.g., coplanar regions have their normals deviate from each other? I think writing it back to the vertices is better for parametric objects, which Blender doesn’t support.

These drawbacks also apply to doing the same in edit mode, but there it’s rather expected. A visual editor for numerical values is, in my view, something people are able to understand, and it would be a comfortable and handy tool in edit mode. But I might be wrong about how well this would be understood.