Where Are Blender's Axes Defined?

It doesn't look like anything you can define with simple rules; you need to read and understand the code to find the relevant lines.

Swapping around the labels is not going to work well, I expect; it's fragile and confusing that way. You need to think about the places that assume an up axis: things like view navigation operators, object add operators, cameras, viewport grid drawing, and importers and exporters. The gizmo display and transform operators are largely independent of the up axis, as far as I know.

Usually it is easiest to identify the relevant operators and properties, and then search the source code for them. For example, View > Viewpoint > Top has the description "Use a preset viewpoint". Searching for that will lead you to view_axis_exec, which contains the implementation of this operator.
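
For instance, you can confirm from Blender's Python console which operator that menu entry maps to before searching the C sources (a quick sketch; the operator name and exact output are what I'd expect from recent versions, not verified here):

```python
import bpy

# The Top viewpoint entry calls the view3d.view_axis operator.
rna = bpy.ops.view3d.view_axis.get_rna_type()
print(rna.identifier)   # expected: 'VIEW3D_OT_view_axis'
print(rna.description)  # expected: 'Use a preset viewpoint'

# Grepping the sources for "view_axis" or the description string
# then leads to view_axis_exec.
```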

Thanks, Brecht. I’ll start there and see where it leads.

I realize this is not a trivial undertaking. In fact, I'm not likely a good enough programmer to pull it off (and even if I were, I'm almost 65 and likely to die long before the task is finished).

But if I can at least get a start on it, perhaps I can convince some other programmers to jump on board.

Again, thanks for giving me a starting point.

One more question…
Where is the entry point? I've found no fewer than five occurrences of main(int argc, char **argv) in the code.

There are various tools being built that all have their own entry points; for the main Blender application, the entry point is in source\creator\creator.c

A much worse problem than finding all the places in the code where "up" is defined is finding all the places where it is assumed that one has a right-handed coordinate system rather than a left-handed one. See https://www.evl.uic.edu/ralph/508S98/coordinates.html

Some DCC and game frameworks use one, some use the other. Finding the places in the code where Blender assumes right-hand is an almost impossible task. I’ve written a lot of code myself and the assumption is buried in tests that look at the results of cross products and make decisions based on whether those values are positive or negative. Unless one decides from the very beginning of programming that one wants to handle both, it is a massive effort to find in retrospect all of the places where that assumption is made.
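
To illustrate the kind of buried test I mean (a toy sketch only, not anything lifted from Blender's code; it uses Blender's mathutils purely for convenience):

```python
from mathutils import Vector

x_axis = Vector((1.0, 0.0, 0.0))
y_axis = Vector((0.0, 1.0, 0.0))
normal = x_axis.cross(y_axis)

# X cross Y is interpreted as "up" (+Z) only under the right-handed
# convention; a mirrored, left-handed system would expect -Z here.
# Branching on the sign of a component like this hard-codes the
# convention without ever naming it.
if normal.z > 0.0:
    print("consistent winding, keep the face")
else:
    print("flipped winding, reverse the face")
```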

I’d imagine that would be quite a deep rewrite, basically starting almost from scratch if one were to try to add that kind of thing. Probably not something most people would gain a whole lot out of though.

Yup. You guys are both right; it’s definitely a daunting task.

And this is way over my head. So instead, I decided to research why there are opposing views on which axis should be vertical. While doing that, I was also reviewing the Humane Rigging series and came up with an idea that just might make this more doable than one might think…

@brecht: Perhaps you could look this over and comment on whether it sounds plausible. It might even work as a GSoC project.

Projection Space: A Proposal to Simplify Global Axis Orientation Variability

Proposal

Add a new level of orientation above Blender's world. For the sake of this discussion, I'll refer to this level of orientation as a 'universal projection'. This universal projection would act as a parent container for Blender's 3D world, and by rotating the universe (parent), the world (child) can be given any axis orientation desired. And because the axis gizmo remains part of the world, not the universe, both preferences for the up axis can be accommodated with minimal coding effort.
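
To make the idea concrete at the user level, here is roughly what it amounts to if you mock it up with an empty today (a bpy sketch; the empty's name and the -90-degree choice are illustrative assumptions, and this of course does not touch the gizmo, grid, or editors, which is exactly what the proposed new level would handle):

```python
import math
import bpy

# Create an empty to stand in for the proposed "universal projection".
universe = bpy.data.objects.new("UniversalProjection", None)
bpy.context.scene.collection.objects.link(universe)

# Tilt the universe -90 degrees around X, so content built with +Z as
# "up" now has that axis pointing along global +Y.
universe.rotation_euler = (math.radians(-90.0), 0.0, 0.0)

# Parent every existing top-level object to it; the whole scene now
# follows the universe's rotation while the object data is untouched.
for ob in list(bpy.context.scene.objects):
    if ob is not universe and ob.parent is None:
        ob.parent = universe
```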

Origin of the Idea for a Universal Projection

While re-watching Nathan Vegdahl's Humane Rigging video series (specifically, the Building a Better Ball Rig with Empties video), I watched as Nathan illustrated parenting as a rigging tool by making a ball the child of a cube. He then rotated the cube 90 degrees, and when he subsequently moved the ball along its local x-axis, it moved in world space along the z-axis.
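
In matrix terms, the effect Nathan demonstrates boils down to this (a sketch with Blender's mathutils; the exact rotation axis used in the video is my assumption):

```python
import math
from mathutils import Matrix, Vector

parent_rotation = Matrix.Rotation(math.radians(-90.0), 3, 'Y')
local_step = Vector((1.0, 0.0, 0.0))   # move the child along its local X

world_step = parent_rotation @ local_step
print(world_step)                      # ~(0, 0, 1): the motion lands on world Z
```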

This made me wonder if Blender could encapsulate the entire 3D world inside a new higher-level entity—a 3D universal projection—that could be rotated. Once the universal projection is rotated, the world (as a child of the universe) will have its axes oriented in whichever way makes the user feel most comfortable.

But Why Bother?

Blender users have been discussing the relative merits of axis orientation for at least two decades. Some prefer the z-axis as the vertical orientation while others prefer the y-axis. Maya offered the World Coordinate System option (switching from y-up, the default, to z-up) as early as 2008 and may well be unique in having this feature. But why do these two differing orientations exist to begin with, and why do some prefer one over the other? Also, can understanding the differences in viewpoint help clarify why certain users believe it's important to have a choice while others don't?

Where These Divergent Perspectives Originated

Math and physics have, for thousands of years, worked with the z-axis oriented vertically. As far as can be ascertained, this began with Euclid, and this orientation is known as absolute, unitary space. In the 17th century, Sir Isaac Newton concluded that absolute, unitary space is inaccessible to the senses (The Hippocampus as a Cognitive Map, section 1.2, Newton, Leibniz, and Berkeley, p. 11). From this we can extrapolate that there is another way to perceive space, which, in the book just cited, is referred to as psychological or egocentric space.

Other fields that use the absolute, unitary space model are architecture and, more recently, some game engines. In these professions, the z-axis is vertical.

Studies at Harvard University's Center on the Developing Child indicate that awareness of egocentric space derives from two systems in our brains. The first is our proprioceptive sense, which allows us to differentiate between right and left. In concert with that, the vestibular system uses fluid in the inner ear to make us aware of gravity and therefore gives us our sense of which way is up. Together, these two systems give us our sense of right and left, up and down.

Professions that use the egocentric space model as defined by Newton are film, animation, and UI programming. The latter is, of course, because the computer display can only be measured in two dimensions.

Based on this, I submit that the difference between the two user viewpoints is:

z-up users are more oriented to absolute, unitary space, and

y-up users are more oriented to egocentric space.

Which Came First, Absolute or Egocentric Spatial Orientation?

Conceptually, absolute space has been around since Euclid. Of the two spatial orientations, it’s the more advanced because it was conceived by a scientific mind to better understand our world and our universe.

Developmentally, our proprioceptive and vestibular senses orient us to egocentric space while we’re still in the cradle. It follows that unless or until a person is indoctrinated into the more scientific viewpoint, she may very well feel awkward in a z-up world.

This would unfortunately be harder to implement, and would also hurt performance.

I still don't understand why any of that would even be necessary. This is a perception issue, not an engineering problem. Change the blue arrow to green, refactor the Vec3s in the UI drawing code to understand that a swizzle is happening, and you're 80% of the way there. I'm oversimplifying of course, but what you describe is a moonshot, and not because we want to land on the moon, but because we want a nice photo of the moon. We have telescopes for that; let's not over-engineer this.
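
By "swizzle" I mean something like the following (pure illustration, not Blender's actual drawing code):

```python
def z_up_to_y_up(v):
    """Map a Z-up vector (x, y, z) to the equivalent Y-up vector (x, z, -y)."""
    x, y, z = v
    return (x, z, -y)

# World "up" in Z-up terms becomes "up" in Y-up terms; the sign flip
# on the last component keeps the coordinate system right-handed.
print(z_up_to_y_up((0.0, 0.0, 1.0)))  # -> (0.0, 1.0, 0.0)
```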

The correct solution is not as hard as it may seem, but this is the wrong one. It’s pushing the problem into Python scripts, shader nodes, drivers, and many other areas that do not need to be aware of the up axis. The solution should only affect code where the up axis actually matters, mainly view navigation and import/export.

This has spawned a very interesting topic of discussion. From a fairly outsider perspective, I'd add that in my experience as a user I've always had trouble remembering to switch to the y-up mindset when interacting with UI objects (keyframes, strips, UVs, etc.). Having a universal y-up would certainly unify such issues on that end.

This is of course a very high-level and non-urgent benefit, but I found it worth mentioning.

Is there anyone familiar enough with the source who might come up with a plan that could work? I’m not nearly experienced enough in coding 3D stuff to do it, but I’m willing to pitch in and help with the grunt work if ever a solution is found.

From the Maya help:

In animation and visual effects, the tradition is to use Y as the “up” or elevation axis, with X and Z as the “ground” axes. However, some other industries traditionally use Z as the up axis and X and Y as the ground axes.

The y-up tradition comes from the 2D era, when XY represented screen coordinates.
It persists in animation and effects, much like imperial units elsewhere, because animation and effects still produce mostly 2D output.
3D software staged its own French Revolution, putting Z along the "gravity direction" based on the mathematical model, independently of 2D software.
For example, 3ds Max doesn't have the ability to set the world to y-up.

As a former Maya user who actually switched from Y-up to Z-up in the Maya world, I recall the trouble I ran into back then. For instance, as soon as you made the switch, viewport navigation broke: the camera used to draw the viewport was still navigating in Y-up space, as it had been created while Y-up was still set. You had no choice but to restart, delete the camera, and create a new one.

Then I realised all the physics was broken: it was hard-coded to Y-up, so from then on gravity worked sideways. Bummer. External scripts broke. Bummer. And so on.

That damn axis affected so many areas that you always had to think twice about whether you really wanted to change it. I'm not surprised @brecht states it's hundreds of lines of code. We had to do it, as our main CAD tool, Alias, was also Z-up.

In Blender, many importers and exporters offer an axis conversion (basically, setting the target up and forward axes). This can get you a long way and is a fairly hassle-free approach.
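
That conversion can also be driven directly from Python via the helper the bundled add-ons use, bpy_extras.io_utils.axis_conversion (a sketch run inside Blender; the Y-up/-Z-forward source convention shown is just one common choice, not a statement about any particular format):

```python
from bpy_extras.io_utils import axis_conversion

# Build a matrix converting a Y-up, -Z-forward asset into Blender's
# native Z-up, Y-forward world.
conv = axis_conversion(
    from_forward='-Z', from_up='Y',
    to_forward='Y', to_up='Z',
).to_4x4()

# Applying it to an imported object's matrix reorients the data
# without touching anything else in the scene, e.g.:
#   imported_object.matrix_world = conv @ imported_object.matrix_world
```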

Yes, industry-standard software has a lot of different implementations that were introduced at some point for local workflows but now have global influence.
Like the y-up world rotation in Maya, or rescaling scene units in 3ds Max: you have to guess which setup won't create a mess, and stick with one particular solution until you run into a scene made by another user with a different approach.

Blender scenes stay uniform for everyone, and that is a good thing.

However difficult it is to change, for science there is no discussion, unfortunately: in front view, x runs right, y runs up, and z comes towards you. That is how ALL mathematical equations are written. As I want to turn the BUI (Blender User Interface) into a scientific visualization tool, I need to adhere to the scientific conventions, which can NOT be ridiculed. (Unless your name is Ton, then you can laugh at the conventions :wink:)

So enough ping-pong, where is the code?

Sigh, no, they're not. You can find literally every up-axis convention in science, depending on the domain you're working in. Plus, what you describe is a right-handed coordinate system, and there's also such a thing as a left-handed coordinate system, which is mirrored.

As Brecht already explained, there is no single place in the Blender code where "up" is defined. There are several places where that direction is assumed implicitly or sometimes hard-coded.

No, it's not. Here's a write-up on the Wolfram math site: Z-Axis

So where should I start looking?