2020-10-15 Animation & Rigging Module

Present: Bassam Kurdali, Christoph Lendenfeld, Demeter Dzadik, Joseph Brandenburg, Luciano Muñoz, Octave Crespel, Sebastian Parborg, Solid Snake, Stanislav Ovcharov, Sybren Stüvel, Wayde Moss

The main goal of this meeting was to identify the most pressing bug fixes & papercuts to work on in the Animation & Rigging module, in order to make 2.91 as polished as possible, and to identify some goals for 2.92.


Short-term goals

The high-priority short-term goals from the last meeting that are still open were reviewed; chosen as the highest-priority short-term goals are:

  • T80296 Fix Animation Channel and Bone Selection Sync. After investigation, it turns out that selection synchronisation between animation channels & the things they animate is rather complex, and needs a proper redesign. For now, Sybren will at least fix the issue that channels are sometimes completely unselectable.
  • D6379 Add an Un-Bake FCurves operator
  • User preference to turn off bone group colors in the animation channel list. These colors are often distracting, and can only be turned off per editor (and are on by default). Sybren will make a task for this.

Goals for 2.92

This list is not exhaustive; the next meeting will extend it.


  • Parenting (plain, “keep transform”, “no inverse”) is confusing now. This isn’t helped by the fact that the “parent inverse” matrix is hidden from the user interface.
  • Boolean and Enum custom properties. Bassam and Demeter expressed a desire for these; they can be created via Python, but not through the UI. Ideally, custom properties would be on par with custom Python properties (creatable via bpy.props).
  • Manually order custom properties, e.g. via drag & drop as in the modifier UI.
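For reference, a minimal sketch of the Python side mentioned above: Boolean and Enum properties can already be registered via bpy.props, just not created through the Custom Properties UI. The property names here are made up purely for illustration, and this only runs inside Blender:

```python
import bpy

# Hypothetical property names, for illustration only.
# Boolean and Enum properties are already creatable via bpy.props:
bpy.types.Object.my_toggle = bpy.props.BoolProperty(
    name="My Toggle",
    default=False,
)
bpy.types.Object.my_mode = bpy.props.EnumProperty(
    name="My Mode",
    items=[
        ("OPTION_A", "Option A", "First hypothetical option"),
        ("OPTION_B", "Option B", "Second hypothetical option"),
    ],
)
```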

Test builds for Animators & Riggers

It is often desired to create test builds for Animators & Riggers. Various approaches were discussed, and a temporary good-enough one was chosen: create a branch for each patch, and if it’s desired to have a build for multiple patches at once, merge those branches into another branch. This is a bit ad-hoc, but at least it allows us to easily remove patches from the test builds.

This will be revisited when James Monteath (DevOps) starts working at Blender Institute.


Quaternion Rotations

Various approaches were discussed to make quaternions more understandable and mathematically correct at the same time:

  • Bassam: Interpolation on individual channels no longer makes sense. A widget in the 3D view could be used to adjust all channels of a selected keyframe at once.
  • Sebastian: Visualise angular velocity in graph editor. Control the start/end orientation in the 3D view.
  • Octave: Visualise multiple keys on the same channel via lines on a sphere (see Chapter 5).

Christoph’s UI Improvements proposal

Proposal: New Graph editor key manipulation operators

Everybody is enthusiastic; this should happen in collaboration with the UI team. Christoph offered to first clean up the existing code to remove duplication, then expand functionality.

Next Meeting

Thursday 29 October 15:00 CET. The link to the agenda will be posted in the #animation-module channel before the meeting.


I have started a thread to discuss the technical side of selection sync: Responding to selection changes


Regarding quaternion rotations:

Interpolation on individual channels no longer makes sense.

Since I wasn’t at the meeting (I don’t generally have time at the moment, unfortunately), I don’t know what the rationale was behind this. But I do want to say that I would be very sad if this were removed from Blender. Treating the quaternion channels individually is one of my favorite things about animating in Blender vs other software (e.g. Maya, and XSI back in the day). I find it far more straightforward to work with than approaches that try to turn quaternion rotations into black boxes that function differently from any other animation property. Also from a rigging perspective, there are cases where it’s useful to put drivers on the quaternion components (which I touched on a bit in Humane Rigging).

I do think including other interpolation approaches in addition to per-channel makes a lot of sense.
Including, potentially, for things other than just rotations. And for quaternions in particular, having SLERP and SQUAD interpolation would be useful for specific situations.

So I’m certainly not at all opposed to providing more ways to manage rotations or animation interpolation more generally. But I think individual quaternion channel interpolation has real advantages, which we shouldn’t throw away so lightly.


I agree with you wholeheartedly. Having Quats directly is important. I don’t think anyone wants to drop them – just want to fix the interpolation.


The thing is, their interpolation only makes sense if they’re synchronized. If the user changes one channel independently of the others, it screws with the interpolation, AFAIK.

Ah, okay. The “no longer makes sense” bit gave me the impression that it was seen as something to be deprecated and/or removed. As long as that’s not the case, I don’t have any issues. I’m very supportive of implementing additional ways to handle rotations, as well as transforms and animation in general.

It makes the interpolation different, for sure, but that’s the point of tweaking animation curves. Whether it screws with the interpolation in a bad way or not depends on the tweak and what the intention is, which is the case for all animation curves.

I don’t want to over-state my case, however. I agree that the quaternion curves aren’t especially intuitive to work with. But they’re not “wrong” in any meaningful way, they’re just another way to interpolate. And they give a direct way to work with and tweak quaternion rotations that otherwise is wholly unavailable. And I think it would be a shame to lose that.

That’s the point, when the (w, x, y, z) of a quaternion no longer follow √(w²+x²+y²+z²) = 1, they don’t represent a valid orientation any more. All the math to use such a quaternion to rotate breaks down. Of course Blender can scale the quaternion so that it has the correct length again (and I think that this is actually what happens now as well), but I don’t understand how breaking the math is a good thing to want.

To put things into perspective, I totally understand the need for animators to have full control over rotations. I also appreciate the need to see what’s going on from frame to frame, for example to see how smooth a curve is, or how fast a bone is going to rotate towards a certain orientation. I’m just not convinced that breaking quaternion math is the only way to achieve this.


I think it’s a bit hyperbolic to say that it breaks the math. Doing a direct component-wise interpolation followed by a reprojection onto the unit hypersphere ( √(w²+x²+y²+z²) = 1) is a perfectly valid interpolation operator. As long as the quaternion multiplication itself is done with a unit quaternion, the math is valid for rotation. I realize it’s not as mathematically elegant as a proper SLERP or SQUAD, but that’s not the same as being broken.
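A minimal sketch of that operator (often called “nlerp”), assuming (w, x, y, z) tuples; this illustrates the idea, not Blender’s actual code:

```python
import math

def nlerp(a, b, t):
    """Component-wise lerp of two unit quaternions (w, x, y, z),
    then reprojection onto the unit hypersphere by normalizing."""
    q = [(1 - t) * ca + t * cb for ca, cb in zip(a, b)]
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

identity = (1.0, 0.0, 0.0, 0.0)
ninety_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))  # 90° about Z
mid = nlerp(identity, ninety_z, 0.5)
print(mid)  # a unit quaternion: 45° about Z
```

At t = 0.5 this lands exactly on the halfway rotation; away from the midpoint, nlerp traces the same great-circle arc as SLERP, just at non-constant speed — which is the trade-off being discussed here.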

Again, I am 100% in favor of also adding SLERP and SQUAD. And I’m 110% in favor of exploring ways to make transform animation (including rotations) more intuitive and practical for animators. And if/when such a time comes that the graph editor becomes obsolete for animating transforms generally (which I would honestly love to see), I’ll be happy to see the component-wise interpolation disappear then. But until then, I think it’s a legitimately useful interpolation approach, and IMO is actually one of the advantages Blender has over other 3d applications (though admittedly a minor one).


In the 3D viewport and N Panel, editing one of the non-scalar components of the quaternion will affect the other non-scalar components automatically, and the quaternion remains normalized. Can’t we do that in the graph editor, somehow? We might need to require a keyframe for all four channels of a quaternion and remove the ability to key them individually if we go for that route. But it should definitely be possible to adjust them individually, in my opinion.
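One possible sketch of that behaviour, with a hypothetical helper name: set one component, then rescale the other three so the quaternion stays unit length. This is an illustration of the idea, not Blender’s actual implementation:

```python
import math

def set_component_normalized(q, index, value):
    """Set one component of a (w, x, y, z) quaternion, then rescale
    the OTHER three components so the whole quaternion stays unit
    length, roughly like editing a single field in the N Panel."""
    value = max(-1.0, min(1.0, value))
    rest = [c for i, c in enumerate(q) if i != index]
    rest_len = math.sqrt(sum(c * c for c in rest))
    target = math.sqrt(max(0.0, 1.0 - value * value))
    if rest_len < 1e-8:
        # Degenerate: the other components give no direction to preserve.
        scaled = [0.0, 0.0, 0.0]
    else:
        scaled = [c * target / rest_len for c in rest]
    out = scaled[:index] + [value] + scaled[index:]
    return tuple(out)

q = set_component_normalized((1.0, 0.0, 0.0, 0.0), 3, 0.5)  # set z = 0.5
print(q)  # still unit length, with z pinned at 0.5
```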

Personally, I’m really grateful for the consideration of quaternions as one 4D thing rather than four 1D things. Of course, I would have no problem with a quaternion (component) mode, but I’d never ever use it.

I think sometimes people forget that normalized component quaternions have impacts that aren’t immediately obvious, and these impacts are exactly those kind of frustrating things that show up only in your interpolation, like, after you’ve rendered and are watching it for the third time and you finally notice, “Hey, why the heck is that bone doing that weird thing between my keyframes?”

I recently troubleshooted something for somebody where they had a quaternion armature action, all on its own in the NLA, with a blend-out. It ended with a negative W on the root. So the blend-out interpolated all the way from the negative W to a 0,0,0,1 quaternion, taking their root through a whole extra rotation.

I don’t know what the best fix for that is. I made a new strip, no blend, giving the root a 0,0,0,-1 rotation, for the original action to blend into, which fixed the problem. But if that’s the fix, it’s really too much to ask most Blender users to understand that problem and that fix. And good thing it was their root, or else they might not have noticed as soon as they did.

The important thing is the interpolation. Does it matter if people input malformed quats into the sidebar? Not at all, not unless they’re zero vectors or something. Sure, normalize them. Hell, it doesn’t matter if they enter Eulers and you turn them into quats. (So I can’t understand what the concerns are regarding sidebar or 3D viewport input.) Nor does it have anything to do with reading drivers in quat mode. (Writing drivers for quats, in a way that’s going to make sense, is something else entirely, but most people aren’t going to be competent at that regardless of whether you’re talking components or not. A proper quat driver needs to change at least two values at the same time, in a related fashion, something the existing driver interface isn’t very good for.)


Kudos for this meeting approach. This should be implemented in all modules.


I think we’re in agreement, then? As I’ve stated multiple times, I’m in favor of adding additional ways to handle quaternions. And to further emphasize, I do mean in favor of, not just neutral. I think it would be a boon to Blender’s capabilities. The only case I’m trying to make in this thread is that the current approach is not broken, has merit, and should also be kept.

I recently troubleshooted something for somebody where they had a quaternion armature action […]

The problem you’re describing here–assuming I’m understanding it correctly–has nothing to do with per-component interpolation, and would still be a problem with e.g. SLERP or any other interpolation approach as well.

The fix would be for the NLA code to check for quaternions that are > 180 degrees apart, and flip one of them if necessary before blending them. I think that would probably be a good thing to do… or at least, off the top of my head I can’t think of any situations that would meaningfully break. But, again, nothing to do with the interpolation approach used.
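That check is tiny; a sketch, assuming (w, x, y, z) tuples and a hypothetical helper name:

```python
def ensure_shortest(a, b):
    """If two quaternions are more than 180 degrees apart on the 4D
    sphere (negative dot product), negate one of them before blending.
    Since q and -q represent the same 3D rotation, the orientations
    themselves are untouched; only the blend path changes."""
    dot = sum(ca * cb for ca, cb in zip(a, b))
    if dot < 0.0:
        b = tuple(-c for c in b)
    return a, b

a = (1.0, 0.0, 0.0, 0.0)
b = (-1.0, 0.0, 0.0, 0.0)   # same orientation as `a`, opposite sign
a, b = ensure_shortest(a, b)
print(b)  # w flipped to +1.0: blending a and b is now a no-op
```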

Sorry, yeah, it was kind of a response to the entire thread, and not trying to say anyone is wrong, but adding a different viewpoint.

I didn’t realize that it wasn’t a slerp problem, I appreciate you clarifying that. Although then the question is, why is Blender providing orientations with -W on, for example, trackball operations? That’s not a good idea, considering it breaks one of the major reasons to use quaternions. (I don’t think it always worked that way…) And then, you can’t just reverse the W, presumably because there’s not enough precision for the quaternions in the sidebar…

Edit: and it seems like the reason that it gives a -W is because interpolation between two positive Ws is not necessarily the shortest path rotation either… Shouldn’t a SLERP between two well-formed quats be the shortest distance along a great circle of a sphere? If not, what steps can one take to get that shortest path rotation? Edit2: Isn’t handling rotation from -w to +w why slerp implementations ( https://en.wikipedia.org/wiki/Slerp ) have the code to reverse direction in the case of negative dot products?
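For what it’s worth, a sketch of SLERP with that negative-dot flip as a pre-step (tuple-based, hypothetical helper name, not Blender’s mathutils implementation):

```python
import math

def slerp(a, b, t):
    """Spherical linear interpolation between unit quaternions
    (w, x, y, z). The negative-dot flip is a pre-step for shortest-path
    behaviour; slerp proper would happily go the long way around."""
    dot = sum(ca * cb for ca, cb in zip(a, b))
    if dot < 0.0:
        b = tuple(-c for c in b)
        dot = -dot
    dot = min(dot, 1.0)        # guard acos against float drift
    theta = math.acos(dot)     # angle between the quaternions in 4D
    if theta < 1e-8:           # nearly identical: fall back to lerp
        return tuple((1 - t) * ca + t * cb for ca, cb in zip(a, b))
    s = math.sin(theta)
    wa = math.sin((1 - t) * theta) / s
    wb = math.sin(t * theta) / s
    return tuple(wa * ca + wb * cb for ca, cb in zip(a, b))

identity = (1.0, 0.0, 0.0, 0.0)
ninety_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q = slerp(identity, ninety_z, 0.5)   # 45° about Z
```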


if only we had a brilliant new member with a special focus on the NLA…

I wasn’t able to reproduce this with a strip in Replace blend mode. Can you give more information? A blend file would be best so I don’t have to guess what your exact setup is.

Sure. I think I can give a minimal version of the file I was provided, as it was originally posted somewhere publicly. (Public models and Mixamo animation, the guy’s just starting out.)


Note the NLA: current edited action has no influence; invertW strip starts muted. Play through, notice the rotation. Unmute invertW strip to see the fix I recommended.

Ahh I see. So what’s going on is that your NLA strip will be blended with the quaternion default properties per component (w=1, xyz=0), so it will linearly interpolate from w=-1 to w=+1, taking you through the 360 degree rotation. The NLA system currently does not blend quaternions within Replace strips as a quaternion, but as separate components. I’ll work on the patch, it shouldn’t be too much of a problem.

Edit: I’ll make a proper bug report and possibly also a design task within the next few days to properly discuss the bug and fix.
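The failure mode is easy to reproduce numerically: per-component blending of the strip’s w = -1 root quaternion against the channel default of w = +1 passes through an invalid all-zero quaternion. A sketch of the arithmetic, not the actual NLA code:

```python
strip = (-1.0, 0.0, 0.0, 0.0)     # action's root quaternion, negative W
default = (1.0, 0.0, 0.0, 0.0)    # channel default the blend-out targets

def blend(influence):
    # Per-component linear blend, as described for Replace strips above:
    return tuple((1 - influence) * d + influence * s
                 for s, d in zip(strip, default))

halfway = blend(0.5)
print(halfway)  # (0.0, 0.0, 0.0, 0.0): not a valid orientation at all
```

Normalizing along that path sweeps the root through the whole extra rotation described earlier, instead of holding still.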


The short answer is: for the same reason it will keep adding degrees for Euler rotations, rather than wrapping back to zero after hitting 360. It’s trying to preserve the rotations as best it can, and quaternions are capable of representing 720 degrees of rotation.

The long answer is: I think there’s definitely room to discuss having them work differently, but there are subtleties to it that might not be immediately apparent. For example, simply not letting W ever be negative would, on its own, cause worse problems than it would solve.
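To make the double-cover point concrete: q and -q encode the same 3D rotation, which is part of why simply forbidding negative W is subtler than it looks. A small self-contained check using the standard quaternion rotation formula (not Blender code):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z):
    v' = v + 2w (u × v) + 2 (u × (u × v)), with u = (x, y, z)."""
    w, u = q[0], q[1:]
    t = cross(u, v)
    t2 = cross(u, t)
    return tuple(vi + 2 * w * ti + 2 * t2i
                 for vi, ti, t2i in zip(v, t, t2))

q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))  # 90° about Z
neg_q = tuple(-c for c in q)
v = (1.0, 0.0, 0.0)
rotated = rotate(q, v)          # ≈ (0, 1, 0)
rotated_neg = rotate(neg_q, v)  # identical: q and -q are the same rotation
print(rotated, rotated_neg)
```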

As I understand it, that isn’t actually part of SLERP itself (and the wording on the Wikipedia page seems to confirm this). SLERP on its own will happily go the long way around, just like per-component interpolation.

Flipping one of the quaternions if needed before interpolating is something that can be done for any interpolation scheme (including per-component), and in fact is precisely what I was suggesting as a fix for your NLA situation. In cases where the user isn’t directly manipulating quaternion values, I think that’s probably(?) always a good thing to do when interpolating/blending, regardless of interpolation scheme. And the NLA is a great example of that.

All of that aside: my impression is that a lot of people seem to think that SLERP, SQUAD, etc. are somehow more “correct” and will magically fix all of their problems. But that’s not the case at all. The reality is that they’re just additional ways to interpolate, which bring with them their own set of trade-offs. They are absolutely useful, so I want them in Blender as well. But they’re not magic bullets, and won’t make quaternion rotations “just work”.

If you (or anyone else) are curious to learn more, I highly recommend these two articles by Jonathan Blow:

(He comes at this from a game-dev perspective, which is a bit different than production animation. But I nevertheless think these are some of the best articles on the topic.)