Animation-level constraints

(NOTE: these ideas assume an animation data model more-or-less like what came out of the 2023 Animation Workshop. Specifically, they assume an animation data block that can contain animation for more than one object at a time, and that has layers with key frame strips on them. The ideas below are not short-term ideas for the current animation system.)

Introduction / Context

Animators often use constraints to help define an animation. A simple example is a character picking up a cup and putting it down elsewhere.

Currently these constraints are a static part of the scene description, living on the constrained bones/objects themselves. Although this works well for constraints that define e.g. a fundamental part of how a rig works, it works less well for the constraints that animators create as part of defining an animation. For example:

  1. Standard (object-level) constraints live for the entire animation, even when disabled for part of it. This means you get dependency cycles if in one part of the animation you want the cup to follow the character’s hand, but in another part have the hand follow the cup. (There are ways to work around this in many cases, but those workarounds add yet more complexity, which makes it easier to get confused and mess something up by accident.)
  2. Standard (object-level) constraints don’t come along with the animation data, even when semantically they’re part of the animation. For example, if rigs and props are separately linked into the lighting file, and the animation is re-hooked-up to them, all relevant constraints and helper objects still have to be manually reconstructed.
  3. Standard (object-level) constraints have no insight into the animation, and therefore can’t do anything smart like ensuring that world-space positions don’t jump/shift when they’re enabled/disabled. So animators are forced to manage such things manually, which is annoying in simple cases and painstaking and error-prone in complex cases.

It’s important to note that none of these issues are blockers. Clearly, animators and animation productions have been managing as-is. But it does impact workflow efficiency and the joy of animating in interaction-heavy shots.

So it would be valuable to have animation-level constraints, which are stored as part of the animation data and which are specifically designed for the animation use case.

Below are some ideas and inspiration for how animation-level constraints could work.

General Notes

Although the ideas further below have important differences, there is one thing that they all have in common: animation-level constraints live on animation layers in some way, whether because they are strips themselves or because they are integrated into key frame animation data. This has some important benefits:

  • Evaluation order of animation-level constraints is explicit, determined by what animation layer they’re on. This makes dependency cycles impossible and allows arbitrary setups.
  • The output of constraints is simply animation data. So e.g. the layers above a constraint simply see animation data that moves in the way specified by the constraint. This allows constraints to be interleaved with animation evaluation, and also makes the semantics of baking individual layers obvious.

Additionally, all of these ideas attempt to leave the positions of a constrained object at the start/end times of the constraint unaffected by that constraint. This is important for locality of effect, so that animators can use animation-level constraints to specify the animation while the constraint is active without accidentally impacting the animation that comes before or after.

Constraints as Interpolators

One possibility is to make animation-level constraints simply a specification of how keys should be interpolated. All keys would still be specified in whatever space the object is actually in (e.g. world space). For the picking-up-a-cup example, that means both the hand and cup would be keyed in world space, and it would be up to the animator (with the help of tooling) to ensure those positions match. Then the “constraint” would simply be a way to make the cup’s key frame interpolation behave as if it were the child of the hand.
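
As a rough illustration of the intended behavior for the simple parenting case, here is a minimal sketch in Python (hypothetical names, positions only): the world-space keys are re-expressed in the hand’s space at the key times, interpolated there, and converted back. The hard part is generalizing this to arbitrary constraint types and sparse, non-aligned keys.

```python
def lerp(a, b, u):
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * u for i in range(3))

def interpolate_as_child(t, t0, t1, cup_world_keys, hand_world):
    """Keys are authored in world space at t0/t1, but interpolation
    behaves as if the cup were a child of the hand."""
    # Re-express the world-space keys in the hand's space at the key times.
    local0 = tuple(cup_world_keys[t0][i] - hand_world(t0)[i] for i in range(3))
    local1 = tuple(cup_world_keys[t1][i] - hand_world(t1)[i] for i in range(3))
    # Interpolate in hand space, then convert back to world space.
    local = lerp(local0, local1, (t - t0) / (t1 - t0))
    return tuple(hand_world(t)[i] + local[i] for i in range(3))
```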

However, it’s not entirely clear how to achieve this in the general case.

It may be possible to do something similar to “Tangent Space Optimization of Controls for Character Animation”, but adapted/extended to work with multiple interacting objects. But there are some notable limitations with that technique that (as far as I know) haven’t been overcome yet, such as all controls needing to be keyed together on the same frames. It’s also not at all clear whether it can even be extended to all the types of constraints we might want to support.

Nevertheless, if we can pull it off, it would go a long way toward the “separate posing from interpolation” philosophy that came out of the October 2022 animation workshop.

Advantages of this approach:

  • Simple concept.
  • Separation of posing from interpolation, which allows the animator to set their keys in e.g. world space without worrying about parent-child relationships, etc. How objects move in relation to each other can be straightforwardly specified after-the-fact.
  • Since in principle it’s more of a tool-based approach, the data model itself might(?) not need to have any real insight into e.g. what is constrained by what, which keeps animation data itself and animation evaluation simpler.

Disadvantages of this approach:

  • How? It looks to be a hard problem, and it’s not even clear whether it’s feasible for all the types of constraints/relationships we want.
  • So far, the closest thing to this (the above-linked paper) requires everything to be keyed together.

In short, as seductive as the idea is, of the ideas I’ve explored so far it seems the least feasible. But then again, I may be looking at this from the wrong perspective, and perhaps there is a way to make something like this work in a straightforward way.

Constraints as Offsets

Another possibility is for animation-level constraints to function as offsets from a “reference” interpolation. This approach treats constraints as procedurally generated animation that is layered on top of the animator’s animation.

The basic idea is as follows:

If you imagine a straight linear world-space interpolation of the cup from the position where it’s picked up to the position where it’s put down, then the constraint binding it to the hand could be computed as the delta between that linear interpolation and the path the cup would take if following the hand. That offset would then be applied to the actual interpolated positions of the cup (e.g. as a strip on an additive layer above the main animation layer).

If the animator wants the cup to exactly follow the hand, they just make sure the cup is keyed at the start/end of the constraint, and set the interpolation between those keys to linear, to match the reference interpolation used to compute the offset. But they can also choose to deviate to achieve additional motion.
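
To make the delta computation concrete, here is a minimal sketch in Python (hypothetical names, positions only; a real implementation would operate on full transforms including rotation):

```python
def lerp(a, b, u):
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * u for i in range(3))

def constraint_offsets(cup_world, hand_world, t0, t1, frames):
    """Per-frame delta between the hand-following path and the straight
    linear reference interpolation of the cup from t0 to t1."""
    # The cup's position relative to the hand at the moment of pickup.
    grab = tuple(cup_world(t0)[i] - hand_world(t0)[i] for i in range(3))
    deltas = {}
    for t in frames:
        u = (t - t0) / (t1 - t0)
        reference = lerp(cup_world(t0), cup_world(t1), u)
        following = tuple(hand_world(t)[i] + grab[i] for i in range(3))
        deltas[t] = tuple(following[i] - reference[i] for i in range(3))
    return deltas

# The deltas would live on an additive layer above the main animation:
#   final_cup(t) = animator_interpolation(t) + deltas[t]
```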

This approach has some advantages:

  • It lets the animation itself be specified in whatever space the animator prefers, independent of constraints.
  • The math is super simple (just computing a delta), and is therefore easy to implement for essentially any type of constraint.
  • It doesn’t require the constraint start/end times to be aligned with key frames.
  • The animator can still animate the constrained object during the constrained interval, which then simply becomes an offset from the constrained position.

However, it also has some serious disadvantages:

  • Although in theory it’s straightforward to use, in practice it may be difficult to explain/make intuitive.
  • It only works for constraints with both a start and an end time. Half-open and infinite-time constraints are ill-defined.
  • Although the start and end times of the constraint can be adjusted without affecting the unconstrained portion of the animation, it would mess up the animation within the constrained time in potentially unpredictable and counter-intuitive ways.
  • The animation data model would need to have insight into what constrains what, and would need to use that knowledge during animation evaluation, which may complicate implementation and make animation data more “rigid” in terms of what it animates.

Due to the disadvantages, this approach might not be suitable as the built-in solution for animation-level constraints. However, due to its simplicity of implementation, it would probably be straightforward to build as a user via something like procedural animation nodes (depending on the design of animation nodes). So if this approach does turn out to be useful in some situations, users may still be able to access it.

Constraints as Properties of Strips

Yet another possibility is for constraints to be a property of key frame animation strips. In this approach, constraints determine how the animation data in a strip is interpreted for the entire duration of a strip. Rather than the user specifying that a constraint exists over a certain range of time, they instead create the constraint within a strip, and the constraint is active within that strip for its entire span.

On its own, this would have the disadvantage of not aligning object positions/poses at the start and end of the constrained time interval. However, these strips could also optionally (at the user’s discretion!) have auto-computed keys at the start and end to match the animation of the layer below (or a preceding abutting strip on the same layer) in world space.

For the example of picking up a cup, the animator might have the main animation on the first layer, and then for the section where the cup is constrained to the hand they would put a strip on the second layer that animates only the cup, in the space of the hand.

An alternative and more general interpretation of this approach is that the user can specify the parameter space of the animation data on a per-strip basis. This more general interpretation expands the feature beyond just constraints, and also suggests a powerful solution for switchable control rigs, among other things.
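
As a rough sketch of how a per-strip parameter space could evaluate, including the reverse solve needed for the optional auto-matched start/end keys, here is a minimal Python illustration (hypothetical names, positions only):

```python
def sample(keys, t):
    """Linearly interpolate keyed 3D values at frame t (simplified)."""
    frames = sorted(keys)
    if t <= frames[0]:
        return keys[frames[0]]
    if t >= frames[-1]:
        return keys[frames[-1]]
    for a, b in zip(frames, frames[1:]):
        if a <= t <= b:
            u = (t - a) / (b - a)
            return tuple(keys[a][i] + (keys[b][i] - keys[a][i]) * u
                         for i in range(3))

class HandSpaceStrip:
    """A strip whose channels are authored in the space of the hand."""

    def __init__(self, keys, hand_world):
        self.keys = keys              # {frame: cup position in hand space}
        self.hand_world = hand_world  # callable: frame -> hand world position

    def evaluate(self, t):
        """Forward solve: hand-space channels -> world-space result."""
        local = sample(self.keys, t)
        return tuple(self.hand_world(t)[i] + local[i] for i in range(3))

    def auto_match_key(self, t, world_target):
        """Reverse solve: key the strip so its world-space output matches
        the layer below (world_target) at frame t."""
        self.keys[t] = tuple(world_target[i] - self.hand_world(t)[i]
                             for i in range(3))
```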

This approach has a lot of advantages:

  • It can handle any type of constraint. The only limitation is that if the constraint has no reasonable way to reverse-solve, then the auto-matching start/end keys feature wouldn’t be available for that particular constraint.
  • More than one constraint can be added inside a strip, so complex setups don’t require e.g. a huge stack of constraint strips on successive animation layers.
  • It’s a unified system that has the potential to simultaneously address some other things we want, namely control rig switching and animation retargeting.
  • It provides a straightforward conceptual framework for baking animation to different parameter spaces: specify a different parameter space (e.g. set of constraints) for a strip, and tell Blender to convert the strip to the new space. Or to bake to a new strip with that new space.
  • Adjusting the start/end times of a constrained strip neither messes up the animation on either side of it nor the animation within it, aside from (non-destructive) truncation.
  • The animation inside and outside a constrained strip can each be defined in its own natural parameter space.

There are also some disadvantages:

  • As a user, adjusting your animation in the vicinity of the transition to/from a constrained strip is a little more challenging, because the animation data on either side of the transition is in a different strip and parameter space. Tooling can likely help with this, but it’s nevertheless a trade-off. And it exacerbates other open questions we have, like “where do the keys go”.
  • Like the offset-based approach, the animation data model would need to have insight into what constrains what, and would need to use that knowledge during animation evaluation, which complicates things and makes animation data more “rigid” in terms of what it’s hooked up to.
  • Compared to e.g. constraints as offsets, where (due to its simplicity) most of the trade-offs are (probably?) pretty obvious, the sophistication and complexity of this approach makes it a lot more likely that this list of disadvantages is significantly incomplete. It also makes it a tall order to implement if we can’t find ways to keep it simple at the technical level.

I have a hard time evaluating this proposal. It’s talking in quite abstract terms with just one practical example about a hand and a cup. It’s difficult to tell if any of the 3 approaches works for other types of constraints we would like to support.

Perhaps IK, FK and switching between them should be defined as part of the armature in a way that Blender understands, instead of as an opaque constraint. Sometimes the channels are IK, sometimes FK, and then you also have channels that animate the transition between IK and FK parameter spaces for different parts of the armature.

If editing tools, keyframing, constraints and interpolation can understand this, they might be more user friendly. I’m a bit concerned that if we make a too-generic system, with constraints or control rigs coming with their own parameter spaces, it might be overcomplicating things.

As far as I know, some animation apps do fine with just one set of IK and FK channels. Similarly, switchable rigs are interesting on paper, but it’s also not clear to me that these have been used in production much. Maybe there are good reasons, but I would like to see practical examples of that. I would guess that if we had good native IK/FK switching in Blender, this hand and cup example might be straightforward.


@brecht I agree with the point that, with something like RigNodes / Control Rig having its own constraint-like system, this could get overcomplex.
The idea of a constraint being on a layer, or on a layer and then time-boxed by a strip, where it can live standalone from the things it interacts with (cup/hand), is powerful. I and others have used features like that for decades in large-scale projects.

“Similarly switchable rigs are interesting on paper but it’s also not clear to me that these have been used in production much.”

This is a valid concern. At first glance this seems like a new thing, but it is old tech that has been production-proven on the biggest projects, films, and games for decades in one form or another. It has finally gone from being specialized software or a specialized role (motion-capture editing) to mainstream thinking, now that Unreal has Control Rig.

Even in Blender, I have provided many working example add-ons, in the Animation Module and during the workshop, that let us have a flexible rig approach allowing animators control over how they need their rig to work per shot, even per pose.

I think, though, that trying to make it over-general, as in “An alternative and more general interpretation of this approach is that the user can specify the parameter space of the animation data on a per-strip basis”, makes it feel like those should be features of the other systems rather than one MEGA constraint. Unless it were a procedural node constraint, but then we come back to what RigNodes are, and I have a hard time separating the ideas at that point. Maybe that is what you are reacting to as well?


I don’t question that animation level constraints and switchable rigs can be useful. But it’s important how exactly they work and which simplifying assumptions you make. And to validate the design against a set of use cases.

Is there a list written down somewhere with:

  • Use cases that we want to support
  • Links to information about production systems that do something similar

I remember some things but certainly also forgot others.

My understanding is that this parameter space per strip is the core idea of the proposal, so that’s what I was reacting to.

But it’s not concrete enough for me to evaluate it. For example some questions spring to mind:

  • Would IK/FK switching in general be handled as animation level constraints?
  • Would that be part of a control rig that does many other things, or be somehow separate and under more animator control?
  • In the example of the cup and hand, does it make more sense to think of this as two constraints? One to enable IK on the hand, and another to constrain the cup and hand together?
  • How is the parameter space of a strip defined, does it come from the constraint somehow?
  • Does the parameter space of a strip somehow affect what is editable in the 3D viewport, or what gets (auto)keyframed? To make it clear for the users what can actually be edited and will have an effect? Can the channels of multiple strips be edited together or not?
  • If there can be multiple constraints on a strip, how do you get a single parameter space from that? Is this for cases where each constraint controls a different part of the body, or also for multiple competing/layered constraints on the same part of the body?
  • How do the concepts of space switching / dynamic re-rooting and per strip parameter spaces relate?

I always have a really hard time trying to talk about development with developers without them turning into prima donnas screaming “Heathen! Feature Request! Feature Request!”

Now, that said, I have taken and will continue to take notes on what “I” think, from my 20 years of experience as a commercial animator doing “Animation”.

Animation meaning not just 3D animation. But all forms of animation.

Disclaimers aside, here is the link to the GitHub with my thoughts on animation. I am doing the research and examples in my spare time, so please take that into consideration; I am not being paid for this. It would be nice to research and explore these ideas full-time, 5 days a week (for now that is not a reality).

https://github.com/adamearle/Back-To-Animation/wiki

@brecht would love to hear your thoughts as you seem to always be the person who needs to be impressed for any changes to be made.

@brecht

I have a hard time evaluating this proposal. It’s talking in quite abstract terms with just one practical example about a hand and a cup. It’s difficult to tell if any of the 3 approaches works for other types of constraints we would like to support.
…
But it’s important how exactly they work and which simplifying assumptions you make. And to validate the design against a set of use cases.

Those are excellent points. More examples need to be part of the introduction/context, and the different approaches should explore how they would work in the context of those examples. And generally I agree that the motivations/goals should be outlined more explicitly and clearly.

I’ll work on this.

Over the last couple of weeks, I’ve also been spending some time in the “breaking your own designs” head space, and I have additional things I want to write up about that, regarding whether any of this is worth it in terms of the technical and UX complexity it might incur, and whether a simpler tools-based approach might be better.

But the approaches themselves still need to be fleshed out better for proper discussion, as you point out.

Thanks Brecht!

@AdamEarle, thanks for the link. There are some interesting references in there that I hadn’t seen. It’s not clear to me if you have a design proposal specifically relating to the design discussed here?

When talking about switchable rigs, we need to be more specific. I don’t have a great understanding of these concepts and hope the animation team can bring clarity here. From what I can tell:

  • We can think of a rig as having 3 parts. There is the articulation rig, which is the minimal yet complete set of bones needed to define a character pose. Then on one side there is a deformation rig that applies that to a particular mesh (not relevant to the discussion here). And on the other side there is the control rig that gives higher-level controls to the animator.
  • In more traditional rigging, there is a fixed control rig and animators will keyframe the control rig (or the articulation rig when it has no separate control).
  • Ephemeral rigging is the idea of a rig as a tool. The control rig is not persistent and not keyframed. It is the articulation rig that is keyframed. Keyframes typically need to be denser, because if the articulation rig by itself (for example) does not have IK, hitting the IK target cannot be done with sparse bone rotation keyframes.
  • Traditionally a rig only has forward solving, from controls to articulation. If a control rig supports reverse solving from articulation to control, it can be used to edit animation from different sources, for example animation created by different rigs, motion capture, or physics solvers.
  • Full Body IK is a solver that solves IK for the entire body at once, where the parenting hierarchy, definition of IK targets, rotation limits, … is solved together as a black box. This is not strictly required for ephemeral rigs as a concept, but having this available as base functionality significantly simplifies building them.
  • IK/FK Switching is about switching some part of the body to be controlled by either IK or FK. The controls/channels to be edited by the user differ depending on the mode. There can also be channels that control smooth transitions between the modes.
  • Space switching is the idea that you can change bones in the control rig to a different space. Possible spaces for a bone can be its parent space, world space, or parented to another bone or object in the scene.
    • One way of using this is to change the space of an IK position control, a hand can for example be placed in the space of a hip bone, and stay in place as the rest of the body is animated.
    • What is unclear to me is whether this concept includes dynamically changing the root bone (and the space of FK rotation channels along with it). Also unclear is if and how FK rotation control space switching is used in practice. Maybe a “look at” constraint enforced by an IK solver?
  • Space switching and IK/FK switching are orthogonal concepts. That is, space switching changes the space of a control, but it does not switch to a different control. Both have the challenge of ensuring continuity when switching in an animation. Solutions might involve keyframing both controls or spaces automatically, using some kind of compensating keyframes, or baking down changes to a common articulation rig (see the sketch below).
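
For the continuity point in the last bullet, the compensating-keyframe idea boils down to re-expressing the current world-space result in the new space at the switch frame. A minimal sketch in Python (hypothetical names, positions only; a real implementation would invert a 4x4 parent matrix):

```python
def compensating_key(world_at_switch, new_parent_world_at_switch):
    """New local value to key at the switch frame so that the
    world-space result stays continuous across the space switch."""
    return tuple(world_at_switch[i] - new_parent_world_at_switch[i]
                 for i in range(3))
```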

Unreal takes these ideas, which have typically been done as add-ons, and makes them a core part of the design. Things are less ephemeral; for example, space switching involves space keyframes that are put on a timeline. Though from the docs it seems like you might need to bake them down to the articulation rig if you want to switch to different spaces or control rigs.

Thinking of the hand and cup example in terms of how it would be done in Unreal, I think constraining a hand to a cup would not be something that is executed by itself, but rather would involve setting up space switching and IK for the control rig to handle it.

Some great content here @brecht. I will add this information in and make amendments. The idea of the GitHub is not so much a proposal, but to try and get people talking and thinking about the future of Blender’s animation process.

The best explanation of ephemeral rigging I’ve heard.

I am not a good rigger and barely understand much of what you mention; however, I would like to contribute a set of ideas about constraints, which come to mind from using the FreeCAD Sketcher. Although sketches are 2D, I think some concepts could be transferred to 3D by adding one more coordinate and the roll. Sorry if I go off topic.

I upload a short video and comment on what you see:

  1. I create a pair of lines that behave like bones in EditMode, and join them at one end with a ConstrainCoincident.
  2. I add a ConstrainAngle to them and check the relationship between the two lines, and also modify the value of the constraint.
  3. I add a new line and selecting one of the previous ones, add a ConstrainParallel and check the behavior between lines. I remove the Constraint and do the same with a ConstrainEqual (distance) and a ConstrainPerpendicular.
  4. I convert the ConstrainAngle to Reference, which internally modifies a parameter of the constraint, and only informs me of the existing angle.
  5. I show the list of constraints and the list of lines.
  6. I show a partial rig of a skeleton, where some bones have their distance and angle restricted, and I play with them a bit to see how they interact.

Lines are not 3D bones, and the purpose of a Sketch is to completely constrain a drawing and not animate it, but anyway I leave a list of concepts, which I think could be applied if skeleton animation were based exclusively on constraint animation.

  • Constraints as objects or elements independent of the bones, which link or relate them.
  • Visual representation and selection from the 3D viewer of constraints, with the possibility of filtering.
  • Add the constraints as a reference in EditMode, and make the constraint effective in PoseMode, if requested.
  • Selection of the bones associated with a constraint and vice versa, selection of the constraints associated with a bone.
  • Constraints for joints.
  • Tools to animate the constraints and not the bones.
  • Tools and keyboard shortcuts to manipulate multiple selections of constraints, as a block or individually.
  • The color of the constraint can show whether it is active, inactive, or its influence is partial.
  • Show Wire Armature while editing Constraints.

Other options:

  • Constraints as forces acting together. If the end of a bone is pulled toward two different places it will end up in the middle, or biased toward one position if that constraint is more influential (see the sketch after this list).
  • Soft constraints with tolerance and damping margins.
  • Different priorities and methods of mixing the influence of constraints (mix, add, over, etc.)
  • One-off constraints on a single frame, or permanent constraints.
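
The “constraints as forces” idea above could be sketched as a weighted blend of competing targets. A minimal illustration in Python (hypothetical names, positions only):

```python
def blend_targets(targets):
    """targets: list of (position, weight) pairs from competing
    constraints. Equal weights land in the middle; a heavier weight
    pulls the result toward that constraint's target."""
    total = sum(weight for _, weight in targets)
    return tuple(sum(pos[i] * weight for pos, weight in targets) / total
                 for i in range(3))

# Example: two targets with equal weight meet in the middle.
# blend_targets([((0.0, 0.0, 0.0), 1.0), ((2.0, 0.0, 0.0), 1.0)])
# -> (1.0, 0.0, 0.0)
```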

To solve:

  • Is PoseMode necessary, or can constraints be accessed from ObjectMode, even if they are part of an Armature that acts as a container parent?
  • In the Sketcher module, redundant, contradictory, or unacceptable constraints give an error and the Sketch is blocked until the conflicting constraints are removed. What should be done in these cases?
  • Is the parent-child relationship necessary? Can it be avoided, or must it coexist? If it is kept, should it be a constraint?

As I said at the beginning, I have no way to know the implications, incompatibilities, or real benefits of all that I mentioned. It simply seemed to me a different approach from the traditional parent-child one, which could have certain advantages and reduce the relationships between objects to a minimum, which perhaps makes better use of nodes.

Thank you for taking the time to read this.


Woaah, nice input! Got anything else?