(NOTE: these ideas assume an animation data model more-or-less like what came out of the 2023 Animation Workshop. Specifically, they assume an animation data block that can contain animation for more than one object at a time, and that has layers with key frame strips on them. The ideas below are not short-term ideas for the current animation system.)
Animators often use constraints to help define an animation. A simple example is a character picking up a cup and putting it down elsewhere.
Currently these constraints are a static part of the scene description, living on the constrained bones/objects themselves. Although this works well for constraints that define e.g. a fundamental part of how a rig works, it works less well for the constraints that animators create as part of defining an animation. For example:
- Standard (object-level) constraints live for the entire animation, even when disabled for part of it. This means you get dependency cycles if in one part of the animation you want the cup to follow the character’s hand, but in another part have the hand follow the cup. (There are ways to work around this in many cases, but those workarounds add yet more complexity which makes it easier to get confused and mess something up by accident.)
- Standard (object-level) constraints don’t come along with the animation data, even when semantically they’re part of the animation. For example, if rigs and props are separately linked into the lighting file, and the animation is re-hooked-up to them, all relevant constraints and helper objects still have to be manually reconstructed.
- Standard (object-level) constraints have no insight into the animation, and therefore can’t do anything smart like ensuring that world-space positions don’t jump/shift when they’re enabled/disabled. So animators are forced to manage such things manually, which is annoying in simple cases and painstaking and error-prone in complex cases.
It’s important to note that none of these issues are blockers. Clearly, animators and animation productions have been managing as-is. But it does impact workflow efficiency and the joy of animating in interaction-heavy shots.
So it would be valuable to have animation-level constraints, which are stored as part of the animation data and which are specifically designed for the animation use case.
Below are some ideas and inspiration for how animation-level constraints could work.
Although the ideas further below have important differences, there is one thing that they all have in common: animation-level constraints live on animation layers in some way, whether that be because they are strips themselves or because they are integrated into key frame animation data. This has some important benefits:
- Evaluation order of animation-level constraints is explicit, determined by what animation layer they’re on. This makes dependency cycles impossible and allows arbitrary setups.
- The output of constraints is simply animation data. So e.g. the layers above a constraint simply see animation data that moves in the way specified by the constraint. This allows constraints to be interleaved with animation evaluation, and also makes the semantics of baking individual layers obvious.
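To make the "constraints are just animation data on layers" idea concrete, here is a minimal sketch of bottom-up layer evaluation. All names (`KeyStrip`, `ConstraintStrip`, `evaluate_layers`) are hypothetical illustrations, not an actual Blender API; the constraint shown is a trivial "copy the hand's location onto the cup".

```python
# Hypothetical sketch: constraint strips are evaluated in layer order,
# and their output is plain animation data that layers above simply see
# as ordinary values. Not a real Blender API; translations only.
from dataclasses import dataclass


@dataclass
class KeyStrip:
    """Ordinary key frame strip: a fixed set of property values."""
    values: dict

    def evaluate(self, pose, frame):
        return dict(self.values)


@dataclass
class ConstraintStrip:
    """Constraint strip: reads the already-evaluated pose from the
    layers below, and outputs plain animation data (here: a trivial
    'child of' that copies the source's location onto the target)."""
    source: str
    target: str

    def evaluate(self, pose, frame):
        return {self.target: pose[self.source]}


def evaluate_layers(layers, frame):
    """Evaluate an animation layer stack at `frame`, bottom to top.
    Each layer's output overwrites the accumulated pose, so evaluation
    order is explicit and dependency cycles are impossible."""
    pose = {}  # property path -> value
    for layer in layers:
        for strip in layer:
            pose.update(strip.evaluate(pose, frame))
    return pose
```

Because a constraint strip only ever sees the result of the layers below it, "hand follows cup" in one part of the animation and "cup follows hand" in another are just two different strips, with no cycle.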
Additionally, all of these ideas attempt to leave the positions of a constrained object at the start/end times of the constraint unaffected by that constraint. This is important for locality of effect, so that animators can use animation-level constraints to specify the animation while the constraint is active without accidentally impacting the animation that comes before or after.
One possibility is to make animation-level constraints simply a specification of how keys should be interpolated. All keys would still be specified in whatever space the object is actually in (e.g. world space). For the picking-up-a-cup example, that means both the hand and cup would be keyed in world space, and it would be up to the animator (with the help of tooling) to ensure those positions match. Then the “constraint” would simply be a way to make the cup’s key frame interpolation behave as if it were the child of the hand.
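For the simplest possible case, a plain parent-to-the-hand relationship with translations only, the intended behavior can be sketched as follows. All names are hypothetical, and this says nothing about how to generalize to arbitrary constraints:

```python
# Illustrative sketch (not a real API): the cup's keys are authored in
# world space, and the "constraint" only changes how the frames in
# between are interpolated. Translations only.

def lerp(a, b, f):
    return tuple(x + (y - x) * f for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def interpolate_as_child(cup_key0, cup_key1, hand_at, t0, t1, t):
    """Interpolate between two world-space cup keys as if the cup were
    a child of the hand. `hand_at(frame)` gives the hand's world-space
    location at any frame."""
    # Express each key relative to the hand at that key's time...
    local0 = sub(cup_key0, hand_at(t0))
    local1 = sub(cup_key1, hand_at(t1))
    # ...interpolate in hand-local space...
    local = lerp(local0, local1, (t - t0) / (t1 - t0))
    # ...and bring the result back to world space at the current frame.
    return add(hand_at(t), local)
```

If the animator's world-space keys coincide with the hand at both key times, the cup rides along with the hand exactly in between; otherwise it blends between the two hand-relative offsets.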
However, it’s not entirely clear how to achieve this.
It may be possible to do something similar to Tangent Space Optimization of Controls for Character Animation, but adapted/extended to work with multiple interacting objects. But there are some notable limitations with that technique that (as far as I know) haven’t been overcome yet, such as all controls needing to be keyed together on the same frames. It’s also not at all clear if it even can be extended arbitrarily to all the types of constraints we might want to support.
Nevertheless, if we can pull it off it would go a long way toward the “separate posing from interpolation” philosophy that came out of the October 2022 animation workshop.
Advantages of this approach:
- Simple concept.
- Separation of posing from interpolation, which allows the animator to set their keys in e.g. world space without worrying about parent-child relationships, etc. How objects move in relation to each other can be straightforwardly specified after-the-fact.
- Since in principle it’s more of a tool-based approach, the data model itself might(?) not need to have any real insight into e.g. what is constrained by what, which keeps animation data itself and animation evaluation simpler.
Disadvantages of this approach:
- How? It looks to be a hard problem, and it’s not even clear whether it’s feasible for all the types of constraints/relationships we want.
- So far, the closest thing to this (the above-linked paper) requires everything to be keyed together.
In short, as seductive as the idea is, of the ideas I’ve explored so far it seems the least feasible. But then again, I may be looking at this from the wrong perspective, and perhaps there is a way to make something like this work in a straightforward way.
Another possibility is for animation-level constraints to function as offsets from a “reference” interpolation. This approach treats constraints as procedurally generated animation that is layered on top of the animator’s animation.
The basic idea is as follows:
If you imagine a straight linear world-space interpolation of the cup from the position where it’s picked up to the position where it’s put down, then the constraint binding it to the hand could be computed as the delta between that linear interpolation and the path the cup would take if following the hand. That offset would then be applied to the actual interpolated positions of the cup (e.g. as a strip on an additive layer above the main animation layer).
If the animator wants the cup to exactly follow with the hand, they just make sure the cup is keyed at the start/end of the constraint, and set the interpolation between those keys to linear, to match the reference interpolation used to compute the offset. But they can also choose to deviate to achieve additional motion.
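The delta computation described above can be sketched in a few lines (translations only, hypothetical names):

```python
# Sketch of the offset-based idea. The constraint is computed as a
# delta between a straight linear "reference" interpolation and the
# path the cup would take if glued to the hand; that delta then lives
# on an additive layer on top of whatever the animator actually keyed.
# Hypothetical names, translations only.

def lerp(a, b, f):
    return tuple(x + (y - x) * f for x, y in zip(a, b))

def follow_hand(pickup_pos, hand_at, t0, t):
    """Where the cup would be if rigidly glued to the hand, keeping
    the grip offset it had at pick-up time."""
    grip = tuple(p - h for p, h in zip(pickup_pos, hand_at(t0)))
    return tuple(h + g for h, g in zip(hand_at(t), grip))

def constraint_offset(pickup_pos, putdown_pos, hand_at, t0, t1, t):
    """Delta between the hand-following path and the linear reference."""
    reference = lerp(pickup_pos, putdown_pos, (t - t0) / (t1 - t0))
    constrained = follow_hand(pickup_pos, hand_at, t0, t)
    return tuple(c - r for c, r in zip(constrained, reference))

def apply_offset(animated_pos, offset):
    """Additive layer: the animator's own interpolation plus the delta.
    If the animator keys linearly (matching the reference), the cup
    follows the hand exactly; deviating from linear adds motion on top."""
    return tuple(a + o for a, o in zip(animated_pos, offset))
```

Note that the reference interpolation only depends on the constraint's start and end positions, which is also why (as discussed below) this approach needs the constraint to have both a start and an end time.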
This approach has some advantages:
- It lets the animation itself be specified in whatever space the animator prefers, independent of constraints.
- The math is super simple (just computing a delta), and is therefore easy to implement for essentially any type of constraint.
- It doesn’t require the constraint start/end times to be aligned with key frames.
- The animator can still animate the constrained object during the constrained interval, which then simply becomes an offset from the constrained position.
However, it also has some serious disadvantages:
- Although in theory it’s straightforward to use, in practice it may be difficult to explain/make intuitive.
- It only works for constraints with both a start and end time. Half-open and infinite-time constraints are ill-defined.
- Although the start and end times of the constraint can be adjusted without affecting the unconstrained portion of the animation, it would mess up the animation within the constrained time in potentially unpredictable and counter-intuitive ways.
- The animation data model would need to have insight into what constrains what, and would need to use that knowledge during animation evaluation, which may complicate implementation and make animation data more “rigid” in terms of what it animates.
Due to the disadvantages, this approach might not be suitable as the built-in solution for animation-level constraints. However, due to its simplicity of implementation, it would probably be straightforward to build as a user via something like procedural animation nodes (depending on the design of animation nodes). So if this approach does turn out to be useful in some situations, users may still be able to access it.
Yet another possibility is for constraints to be a property of key frame animation strips. In this approach, constraints determine how the animation data in a strip is interpreted. Rather than specifying that a constraint exists over a certain range of time, the user creates the constraint within a strip, and the constraint is active for that strip’s entire span.
On its own, this would have the disadvantage of not aligning object positions/poses at the start and end of the constrained time interval. However, these strips could also optionally (at the user’s discretion!) have auto-computed keys at the start and end to match the animation of the layer below (or a preceding abutting strip on the same layer) in world space.
For the example of picking up a cup, the animator might have the main animation on the first layer, and then for the section where the cup is constrained to the hand they would put a strip on the second layer that animates only the cup, in the space of the hand.
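The auto-matched boundary keys could work roughly like this (translations only, hypothetical names): at the strip's start and end frames, the pose coming from the layer below is reverse-solved into the strip's parameter space and written as keys.

```python
# Sketch of auto-computed boundary keys for a constrained strip.
# Hypothetical names, translations only; a real implementation would
# invert the hand's full transform matrix rather than just subtracting
# locations.

def to_hand_space(world_pos, hand_pos):
    """Reverse-solve a world-space position into hand space."""
    return tuple(w - h for w, h in zip(world_pos, hand_pos))

def auto_boundary_keys(strip_start, strip_end, layer_below_at, hand_at):
    """Compute hand-space keys at the strip boundaries that match the
    layer below in world space, so enabling the constrained strip
    causes no world-space jump at either end."""
    return {
        t: to_hand_space(layer_below_at(t), hand_at(t))
        for t in (strip_start, strip_end)
    }
```

This is also where the reverse-solve limitation mentioned below comes from: a constraint type with no reasonable inverse simply can't offer these auto-matched keys.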
An alternative and more general interpretation of this approach is that the user can specify the parameter space of the animation data on a per-strip basis. This more general interpretation expands the feature beyond just constraints, and also suggests a powerful solution for switchable control rigs, among other things.
This approach has a lot of advantages:
- It can handle any type of constraint. The only limitation is that if the constraint has no reasonable way to reverse-solve, then the auto-matching start/end keys feature wouldn’t be available for that particular constraint.
- More than one constraint can be added inside a strip, so complex setups don’t require e.g. a huge stack of constraint strips on successive animation layers.
- It’s a unified system that has the potential to simultaneously address other things we want, namely control rig switching and animation retargeting.
- It provides a straightforward conceptual framework for baking animation to different parameter spaces: specify a different parameter space (e.g. set of constraints) for a strip, and tell Blender to convert the strip to the new space. Or to bake to a new strip with that new space.
- Adjusting the start/end times of a constrained strip neither messes up the animation on either side of it nor the animation within it, aside from (non-destructive) truncation.
- The animation inside and outside a constrained strip can each be defined in its own natural parameter space.
There are also some disadvantages:
- As a user, adjusting your animation in the vicinity of the transition to/from a constrained strip is a little more challenging, because the animation data on either side of the transition is in a different strip and parameter space. Tooling can likely help with this, but it’s nevertheless a trade off. And it exacerbates the challenge of other questions we have, like “where do the keys go”.
- Like the offset-based approach, the animation data model would need to have insight into what constrains what, and would need to use that knowledge during animation evaluation, which complicates things and makes animation data more “rigid” in terms of what it’s hooked up to.
- Compared to e.g. constraints as offsets where (due to its simplicity) most of the trade offs are (probably?) pretty obvious, the sophistication and complexity of this approach makes it a lot more likely that this list of disadvantages is significantly incomplete. It also makes it a tall order to implement if we can’t find ways to keep it simple at the technical level.