That’s the point: when the (w, x, y, z) components of a quaternion no longer satisfy √(w²+x²+y²+z²) = 1, they no longer represent a valid orientation. All the math that uses such a quaternion for rotation breaks down. Of course Blender can rescale the quaternion so that it has unit length again (and I think that this is actually what happens now as well), but I don’t understand how breaking the math is a good thing to want.
To put things into perspective, I totally understand the need for animators to have full control over rotations. I also appreciate the need to see what’s going on from frame to frame, for example to see how smooth a curve is, or how fast a bone is going to rotate towards a certain orientation. I’m just not convinced that breaking quaternion math is the only way to achieve this.
I think it’s a bit hyperbolic to say that it breaks the math. Doing a direct component-wise interpolation followed by a reprojection onto the unit hypersphere (√(w²+x²+y²+z²) = 1) is a perfectly valid interpolation operator. As long as the quaternion multiplication itself is done with a unit quaternion, the math is valid for rotation. I realize it’s not as mathematically elegant as a proper SLERP or SQUAD, but that’s not the same as being broken.
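For the curious, that operator is small enough to sketch in a few lines of Python (hypothetical helper names, not Blender’s actual code):

```python
import math

def nlerp(q0, q1, t):
    """Component-wise linear interpolation followed by renormalization
    back onto the unit hypersphere. Hypothetical helper, not Blender's
    actual implementation."""
    blended = [(1.0 - t) * a + t * b for a, b in zip(q0, q1)]
    norm = math.sqrt(sum(c * c for c in blended))
    return tuple(c / norm for c in blended)

# Halfway between the identity and a 90-degree rotation about Z (w, x, y, z):
identity = (1.0, 0.0, 0.0, 0.0)
quarter_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
mid = nlerp(identity, quarter_z, 0.5)
angle = 2.0 * math.degrees(math.acos(mid[0]))
# mid is unit length, so it is a valid rotation. angle is 45 degrees here:
# in this symmetric case the result matches slerp exactly.
```

So as long as the reprojection happens before the quaternion is used to rotate anything, every quaternion that actually reaches the rotation math is a unit quaternion.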
Again, I am 100% in favor of also adding SLERP and SQUAD. And I’m 110% in favor of exploring ways to make transform animation (including rotations) more intuitive and practical for animators. And if/when such a time comes that the graph editor becomes obsolete for animating transforms generally (which I would honestly love to see), I’ll be happy to see the component-wise interpolation disappear then. But until then, I think it’s a legitimately useful interpolation approach, and IMO is actually one of the advantages Blender has over other 3d applications (though admittedly a minor one).
In the 3D viewport and N Panel, editing one of the non-scalar components of the quaternion will affect the other non-scalar components automatically, and the quaternion remains normalized. Can’t we do that in the graph editor, somehow? We might need to require a keyframe for all four channels of a quaternion and remove the ability to key them individually if we go for that route. But it should definitely be possible to adjust them individually, in my opinion.
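As a rough illustration of what that could look like, here is one way to do a constrained edit: set the requested component, then rescale the other three so the quaternion stays normalized. This is a guess at the kind of thing the N Panel does, with hypothetical names, not Blender’s actual code:

```python
import math

def set_component(q, index, value):
    """Set one component of a unit quaternion and rescale the other three
    so the result stays on the unit hypersphere. Hypothetical sketch, not
    Blender's code. (The degenerate case where the other three components
    are all zero is left unhandled here.)"""
    value = max(-1.0, min(1.0, value))
    rest_norm = math.sqrt(sum(c * c for i, c in enumerate(q) if i != index))
    target = math.sqrt(max(0.0, 1.0 - value * value))
    scale = target / rest_norm if rest_norm > 1e-12 else 0.0
    out = [c * scale for c in q]
    out[index] = value
    return tuple(out)

# Setting X to 0.5 on the identity quaternion (w, x, y, z):
q = set_component((1.0, 0.0, 0.0, 0.0), 1, 0.5)
# q is (sqrt(0.75), 0.5, 0, 0), still unit length
```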
Personally, I’m really grateful for consideration of quaternions as one 4D thing rather than four 1D things. No, of course I would have no problem with a quaternion(component) mode, but I’d never ever use it.
I think sometimes people forget that normalized component quaternions have impacts that aren’t immediately obvious, and these impacts are exactly the kind of frustrating things that show up only in your interpolation: like, after you’ve rendered and are watching it for the third time, you finally notice, “Hey, why the heck is that bone doing that weird thing between my keyframes?”
I recently troubleshot something for somebody where they had a quaternion armature action, all on its own in the NLA, with a blend-out. It ended with a negative W on the root. So the blend-out interpolated all the way from the negative W to a 0,0,0,1 quaternion, taking their root through a whole extra rotation.
I don’t know what the best fix for that is. I made a new strip, no blend, giving the root a 0,0,0,-1 rotation, for the original action to blend into, which fixed the problem. But if that’s the fix, it’s really too much to ask most Blender users to understand that problem and that fix. And good thing it was their root, or else they might not have noticed as soon as they did.
The important thing is the interpolation. Does it matter if people input malformed quats into the sidebar? Not at all, not unless they’re zero vectors or something. Sure, normalize them. Hell, it doesn’t matter if they enter Eulers and you turn them into quats. (So I can’t understand what the concerns are regarding sidebar or 3D viewport input.) Nor does it have anything to do with reading drivers in quat mode. (Writing drivers for quats, in a way that’s going to make sense, is something else entirely, but most people aren’t going to be competent at that regardless of whether you’re talking components or not. A proper quat driver needs to change at least two values at the same time, in a related fashion, something the existing driver interface isn’t very good for.)
I think we’re in agreement, then? As I’ve stated multiple times, I’m in favor of adding additional ways to handle quaternions. And to further emphasize, I do mean in favor of, not just neutral. I think it would be a boon to Blender’s capabilities. The only case I’m trying to make in this thread is that the current approach is not broken, has merit, and should also be kept.
I recently troubleshot something for somebody where they had a quaternion armature action […]
The problem you’re describing here (assuming I’m understanding it correctly) has nothing to do with per-component interpolation, and would still be a problem with e.g. SLERP or any other interpolation approach as well.
The fix would be for the NLA code to check for quaternions that are > 180 degrees apart, and flip one of them if necessary before blending them. I think that would probably be a good thing to do… or at least, off the top of my head I can’t think of any situations that would meaningfully break. But, again, nothing to do with the interpolation approach used.
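A sketch of that check, with hypothetical helper names (not the actual NLA code). It relies only on the fact that q and -q represent the same rotation:

```python
def quat_dot(q0, q1):
    return sum(a * b for a, b in zip(q0, q1))

def align_for_blend(q0, q1):
    """Flip q1 if the two quaternions sit on opposite hemispheres of the
    4D unit sphere (negative dot product). Since q and -q represent the
    same rotation, this never changes either endpoint's orientation; it
    only makes a blend between them take the short way around."""
    if quat_dot(q0, q1) < 0.0:
        q1 = tuple(-c for c in q1)
    return q0, q1

# The NLA case from this thread: a root ending at W = -1 (same rotation
# as the identity), blending toward the default (1, 0, 0, 0):
q0, q1 = align_for_blend((-1.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))
# q1 is now (-1, 0, 0, 0), so a blend between them stays put instead of
# sweeping through a full extra revolution.
```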
Sorry, yeah, it was kind of a response to the entire thread, and not trying to say anyone is wrong, but adding a different viewpoint.
I didn’t realize that it wasn’t a slerp problem; I appreciate you clarifying that. Although then the question is, why is Blender producing orientations with a negative W for, for example, trackball operations? That’s not a good idea, considering it breaks one of the major reasons to use quaternions. (I don’t think it always worked that way…) And then, you can’t just flip the sign of W, presumably because there’s not enough precision for the quaternions in the sidebar…
Edit: and it seems like the reason that it gives a -W is because interpolation between two positive Ws is not necessarily the shortest path rotation either… Shouldn’t a SLERP between two well-formed quats be the shortest distance along a great circle of a sphere? If not, what steps can one take to get that shortest path rotation? Edit2: Isn’t handling rotation from -w to +w why slerp implementations ( https://en.wikipedia.org/wiki/Slerp ) have the code to reverse direction in the case of negative dot products?
Ahh I see. So what’s going on is that your NLA strip will be blended with the quaternion default properties per component (w=1, xyz=0), so it will linearly interpolate from w=-1 to w=+1, taking you through the 360 degree rotation. The NLA system currently does not blend quaternions within Replace strips as a quaternion, but as separate components. I’ll work on the patch, it shouldn’t be too much of a problem.
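To show numerically what that per-component blend does, here is an illustrative stand-in for the reported case (the actual file’s values are unknown; the -90° rotation below is made up for the example):

```python
import math

def blend_components(q0, q1, t):
    """Per-component linear blend followed by normalization: a sketch of
    what the NLA's Replace blending effectively does per channel."""
    b = [(1.0 - t) * a + t * c for a, c in zip(q0, q1)]
    n = math.sqrt(sum(x * x for x in b))
    return tuple(x / n for x in b)

# A -90 degree rotation about Z stored with W < 0, blending out toward
# the default rest quaternion (w=1, xyz=0):
neg_w = (-math.cos(math.radians(45)), 0.0, 0.0, math.sin(math.radians(45)))
rest = (1.0, 0.0, 0.0, 0.0)
angles = [
    2.0 * math.degrees(math.acos(blend_components(neg_w, rest, t)[0]))
    for t in (0.0, 0.5, 1.0)
]
# angles is approximately [270, 135, 0]: the blend sweeps 270 degrees the
# long way around instead of the 90 degrees an animator would expect.
```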
Edit: I’ll make a proper bug report and possibly also a design task within the next few days to properly discuss the bug and fix.
The short answer is: for the same reason it will keep adding degrees for Euler rotations, rather than wrapping back to zero after hitting 360. It’s trying to preserve the rotations as best it can, and quaternions are capable of representing 720 degrees of rotation.
The long answer is: I think there’s definitely room to discuss having them work differently, but there are subtleties to it that might not be immediately apparent. For example, simply not letting W ever be negative would, on its own, cause worse problems than it would solve.
As I understand it, that isn’t actually part of SLERP itself (and the wording on the Wikipedia page seems to confirm this). SLERP on its own will happily go the long way around, just like per-component interpolation.
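This is easy to demonstrate with a textbook slerp that has no shortest-path check (hypothetical helper, illustrative values):

```python
import math

def slerp(q0, q1, t):
    """Textbook slerp with NO shortest-path check: it follows the
    great-circle arc between q0 and q1 exactly as given, even when
    that is the long way around."""
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(q0, q1))))
    theta = math.acos(d)
    if theta < 1e-9:
        return q0
    s = math.sin(theta)
    return tuple(
        (math.sin((1.0 - t) * theta) * a + math.sin(t * theta) * b) / s
        for a, b in zip(q0, q1)
    )

# A 10-degree rotation about Z, written with its sign flipped
# (still the same rotation):
flipped = tuple(
    -c for c in (math.cos(math.radians(5)), 0.0, 0.0, math.sin(math.radians(5)))
)
mid = slerp((1.0, 0.0, 0.0, 0.0), flipped, 0.5)
angle = 2.0 * math.degrees(math.acos(abs(mid[0])))
# angle is 175 degrees: halfway through the blend the object is rotated
# 175 degrees, even though the endpoints are only 10 degrees apart.
```

The negative-dot-product flip in the Wikipedia listing is a separate step bolted on before the interpolation itself, which is exactly why it can be bolted onto per-component interpolation just as well.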
Flipping one of the quaternions if needed before interpolating is something that can be done for any interpolation scheme (including per-component), and in fact is precisely what I was suggesting as a fix for your NLA situation. In cases where the user isn’t directly manipulating quaternion values, I think that’s probably(?) always a good thing to do when interpolating/blending, regardless of interpolation scheme. And the NLA is a great example of that.
All of that aside: my impression is that a lot of people seem to think that SLERP, SQUAD, etc. are somehow more “correct” and will magically fix all of their problems. But that’s not the case at all. The reality is that they’re just additional ways to interpolate, which bring with them their own set of trade-offs. They are absolutely useful, so I want them in Blender as well. But they’re not magic bullets, and won’t make quaternion rotations “just work”.
If you (or anyone else) are curious to learn more, I highly recommend these two articles by Jonathan Blow:
Thanks for your input, it’s certainly useful. And the current approach won’t be thrown out until we have something that works even better.
If there is an issue with a description, an example file, and steps to reproduce, just use the bug tracker to discuss it.
I think it’s great that people share feedback on the topics of the meeting, let’s keep that going. It’s just that when the discussion moves to very specific issues and their possible solutions, I think that the meeting notes may not be the best place for that. A design task on developer.blender.org would be a better place for such things.
I appreciate that! And sorry if I’ve been coming across as aggressive (re-reading some of my posts, I’m realizing that may be the case). That wasn’t my intention, nor my feelings behind them when writing them.
I think rotations generally (even aside from quaternions specifically) are a pain point for animators, and I’m very excited to see the animation/rigging team exploring better ways for animators to work with them!
Thanks for the info, and the links. I have found plenty of people who say, “Don’t use slerp.” What I’ve noticed is that these people tend to be game coders, who care deeply about performance, and are presumably creating their keyframes in a different app that probably isn’t using normalized component quats. If you have baked keyframes, then I can easily imagine that nobody would ever notice the difference, because your keyframes represent tiny rotational differences.
But when you’re actually creating those frames, you’re creating breakdowns from potentially large rotational differences, where normalized components are not going to give you smooth breakdowns unless you cut them exactly in half. Creating the animation data in the first place is different than interpolating that data in a game.
It looks to me like Blow says exactly that: “Hey, we’re sampling animation created elsewhere at 30hz, perceptual differences are minimal and, probably, not even inspected by the animator at a higher frequency. Let’s do what’s fast and what lets us blend without caring about blend order.”
I can see how one could consider “shortest path interpolation” as something potentially separate from slerp. (I could also see how one could consider it part of a slerp implementation.) Yes, you could do shortest path without slerp, or slerp without shortest path. But shortest path requires a temporal awareness that Blender doesn’t have right now, because judging whether to flip the W requires knowing the last and next keyframes, not just the current interpolation. (For reasonable f-curves; for less reasonable f-curves, I think you’d need to know the last and next min/max, but my understanding isn’t advanced enough to be confident.) Even without shortest-path interpolation, slerp requires that temporal awareness, and once you have it, shortest-path interpolation is easy (and smart; and no, I wouldn’t consider an “any-path slerp” to be half as useful).
Obviously, moving away from normalized-component interpolation doesn’t magically make anybody a better animator. But it does mean that, if you want, you can properly animate a clock with quaternion hands keyed at 4, 8, and 12, and not need a math lesson to understand why it’s not correct at 5 o’clock. And when used in conjunction with shortest path, as I’ve seen it done in previous animation software I used, it solves a lot of problems with character animation that otherwise require unintuitive workarounds, like the one I mentioned. Shortest-path quaternion interpolation makes quats behave much, much more like people expect rotation to interpolate. The only thing people get confused by is rotations of 180 degrees or more, and compared to the issues with normalized components, that’s easy to understand and easy to explain.
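To put numbers on the clock example (illustrative values only, with hypothetical helper names): hands keyed 120 degrees apart, sampled a quarter of the way between keys, which is roughly where 5 o’clock falls between the 4 and 8 keys:

```python
import math

def nlerp(q0, q1, t):
    """Per-component blend plus renormalization (sketch)."""
    b = [(1.0 - t) * a + t * c for a, c in zip(q0, q1)]
    n = math.sqrt(sum(x * x for x in b))
    return tuple(x / n for x in b)

def slerp(q0, q1, t):
    """Textbook slerp: constant angular velocity along the arc (sketch)."""
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(q0, q1))))
    theta = math.acos(d)
    if theta < 1e-9:
        return q0
    s = math.sin(theta)
    return tuple(
        (math.sin((1.0 - t) * theta) * a + math.sin(t * theta) * b) / s
        for a, b in zip(q0, q1)
    )

def z_rotation(deg):
    h = math.radians(deg) / 2.0
    return (math.cos(h), 0.0, 0.0, math.sin(h))

def angle_of(q):
    return 2.0 * math.degrees(math.acos(max(-1.0, min(1.0, q[0]))))

four, eight = z_rotation(120.0), z_rotation(240.0)  # keys 120 degrees apart
t = 0.25  # a quarter of the way: where 5 o'clock should land
slerp_angle = angle_of(slerp(four, eight, t))  # 150: exactly on the hour mark
nlerp_angle = angle_of(nlerp(four, eight, t))  # about 147.8: off the mark
```

Only at t = 0.5 do the two agree exactly, which is the “cut them exactly in half” caveat mentioned earlier in the thread.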
Yes, there are things that moving from normalized-component breaks. Like, a sine f-curve modifier on a quat component. But those are things that never made any sense to begin with. Part of what’s exciting to me is that actually treating quats as quats allows Blender developers to start doing things like quat f-curve modifiers that make sense. Like an additive sine on a slerp fac, rather than on components. The doors that close were always pretty crappy, and it allows new doors to open.
Sorry. Hopefully the rest of this discussion is okay, and you’ll let us know if not. I never created a bug report related to that issue, because AFAICT, it was working according to dev intent.
Oh yeah, for sure, the primary reason for using direct component interpolation in game development is performance. That’s actually also relevant for production animation (e.g. keeping a responsive viewport when animating with a large number of characters, etc.). But I agree it’s not nearly as critical, and is not the primary reason I’m advocating for keeping it around in Blender.
In general, I feel like you think we have a disagreement, whereas I’m pretty sure we don’t. At least about the facts of quaternion rotations and interpolation methods thereof, and also about the usefulness of having other ways to handle and work with quaternions in Blender.
Again, the only case I’m making here is that the current approach is also good, and should be kept.