Animation-level constraints

(NOTE: these ideas assume an animation data model more-or-less like what came out of the 2023 Animation Workshop. Specifically, they assume an animation data block that can contain animation for more than one object at a time, and that has layers with key frame strips on them. The ideas below are not short-term ideas for the current animation system.)

Introduction / Context

Animators often use constraints to help define an animation. A simple example is a character picking up a cup and putting it down elsewhere.

Currently these constraints are a static part of the scene description, living on the constrained bones/objects themselves. Although this works well for constraints that define e.g. a fundamental part of how a rig works, it works less well for the constraints that animators create as part of defining an animation. For example:

  1. Standard (object-level) constraints live for the entire animation, even when disabled for part of it. This means you get dependency cycles if in one part of the animation you want the cup to follow the character’s hand, but in another part have the hand follow the cup. (There are ways to work around this in many cases, but those workarounds add yet more complexity which makes it easier to get confused and mess something up by accident.)
  2. Standard (object-level) constraints don’t come along with the animation data, even when semantically they’re part of the animation. For example, if rigs and props are separately linked into the lighting file, and the animation is re-hooked-up to them, all relevant constraints and helper objects still have to be manually reconstructed.
  3. Standard (object-level) constraints have no insight into the animation, and therefore can’t do anything smart like ensuring that world-space positions don’t jump/shift when they’re enabled/disabled. So animators are forced to manage such things manually, which is annoying in simple cases and painstaking and error-prone in complex cases.

It’s important to note that none of these issues are blockers. Clearly, animators and animation productions have been managing as-is. But it does impact workflow efficiency and the joy of animating in interaction-heavy shots.

So it would be valuable to have animation-level constraints, which are stored as part of the animation data and which are specifically designed for the animation use case.

Below are some ideas and inspiration for how animation-level constraints could work.

General Notes

Although the ideas further below have important differences, there is one thing that they all have in common: animation-level constraints live on animation layers in some way, whether as strips themselves or integrated into the key frame animation data. This has some important benefits:

  • Evaluation order of animation-level constraints is explicit, determined by what animation layer they’re on. This makes dependency cycles impossible and allows arbitrary setups.
  • The output of constraints is simply animation data. So e.g. the layers above a constraint simply see animation data that moves in the way specified by the constraint. This allows constraints to be interleaved with animation evaluation, and also makes the semantics of baking individual layers obvious.

Additionally, all of these ideas attempt to leave the positions of a constrained object at the start/end times of the constraint unaffected by that constraint. This is important for locality of effect, so that animators can use animation-level constraints to specify the animation while the constraint is active without accidentally impacting the animation that comes before or after.
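As a toy illustration of these two points (explicit, layer-determined evaluation order, and constraints whose output is plain animation data), here is a hedged sketch. The names and the dict-based "pose" representation are made up for illustration; this is not a proposed data model:

```python
# Minimal sketch: a layer stack where a constraint is just another layer
# producing ordinary animation data for the layers above it.

def evaluate_layers(layers, t):
    """Evaluate an animation layer stack bottom-up at time t.

    Each layer is a function (pose_so_far, t) -> new_pose. A constraint
    strip is just such a function, so evaluation order is explicit and
    layers above a constraint only ever see plain animation data."""
    pose = {}
    for layer in layers:
        pose = layer(pose, t)
    return pose

# A key frame layer animating the hand and cup, and a "constraint" layer
# that overrides the cup based on what the layers below produced.
key_layer = lambda pose, t: {**pose, "hand": 2.0 * t, "cup": 0.0}
follow_hand = lambda pose, t: {**pose, "cup": pose["hand"]}
```

For example, `evaluate_layers([key_layer, follow_hand], 1.5)` yields `{"hand": 3.0, "cup": 3.0}`: the constraint layer simply replaces the cup's animation with hand-following animation, and baking that layer would just mean storing its output as keys.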

Constraints as Interpolators

One possibility is to make animation-level constraints simply a specification of how keys should be interpolated. All keys would still be specified in whatever space the object is actually in (e.g. world space). For the picking-up-a-cup example, that means both the hand and cup would be keyed in world space, and it would be up to the animator (with the help of tooling) to ensure those positions match. Then the “constraint” would simply be a way to make the cup’s key frame interpolation behave as if it were the child of the hand.

However, it’s not entirely clear how to achieve this.

It may be possible to do something similar to Tangent Space Optimization of Controls for Character Animation, but adapted/extended to work with multiple interacting objects. But there are some notable limitations with that technique that (as far as I know) haven’t been overcome yet, such as all controls needing to be keyed together on the same frames. It’s also not at all clear if it even can be extended arbitrarily to all the types of constraints we might want to support.

Nevertheless, if we can pull it off, it would go a long way toward the “separate posing from interpolation” philosophy that came out of the October 2022 animation workshop.

Advantages of this approach:

  • Simple concept.
  • Separation of posing from interpolation, which allows the animator to set their keys in e.g. world space without worrying about parent-child relationships, etc. How objects move in relation to each other can be straightforwardly specified after-the-fact.
  • Since in principle it’s more of a tool-based approach, the data model itself might(?) not need to have any real insight into e.g. what is constrained by what, which keeps animation data itself and animation evaluation simpler.

Disadvantages of this approach:

  • How? It looks to be a hard problem, and it’s not even clear whether it’s feasible for all the types of constraints/relationships we want.
  • So far, the closest thing to this (the above-linked paper) requires everything to be keyed together.

In short, as seductive as the idea is, of the ideas I’ve explored so far it seems the least feasible. But then again, I may be looking at this from the wrong perspective, and perhaps there is a way to make something like this work in a straightforward way.

Constraints as Offsets

Another possibility is for animation-level constraints to function as offsets from a “reference” interpolation. This approach treats constraints as procedurally generated animation that is layered on top of the animator’s animation.

The basic idea is as follows:

If you imagine a straight linear world-space interpolation of the cup from the position where it’s picked up to the position where it’s put down, then the constraint binding it to the hand could be computed as the delta between that linear interpolation and the path the cup would take if following the hand. That offset would then be applied to the actual interpolated positions of the cup (e.g. as a strip on an additive layer above the main animation layer).

If the animator wants the cup to exactly follow with the hand, they just make sure the cup is keyed at the start/end of the constraint, and set the interpolation between those keys to linear, to match the reference interpolation used to compute the offset. But they can also choose to deviate to achieve additional motion.
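In hedged 1D terms (a single float standing in for a full transform; `hand_path`, `cup_start`, etc. are illustrative names, not a proposed API), the delta computation described above could look like:

```python
def lerp(a, b, t):
    """Linear interpolation between two values."""
    return a + (b - a) * t

def offset_constraint(cup_start, cup_end, hand_path, t):
    """Delta between the hand-following path and the straight linear
    reference interpolation of the cup, at parameter t in [0, 1]."""
    reference = lerp(cup_start, cup_end, t)    # reference interpolation
    constrained = hand_path(t)                 # cup position if glued to the hand
    return constrained - reference             # the constraint's contribution

def evaluate(cup_anim, cup_start, cup_end, hand_path, t):
    """Apply the offset additively on top of the animator's own cup
    animation, as a strip on an additive layer would."""
    return cup_anim(t) + offset_constraint(cup_start, cup_end, hand_path, t)
```

If the hand path starts at `cup_start` and ends at `cup_end`, the offset is zero at t = 0 and t = 1, which is the locality-of-effect property: the constraint leaves the surrounding animation untouched. And if the animator's `cup_anim` is exactly the linear reference, `evaluate()` returns the hand path itself, so the cup follows the hand exactly.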

This approach has some advantages:

  • It lets the animation itself be specified in whatever space the animator prefers, independent of constraints.
  • The math is super simple (just computing a delta), and is therefore easy to implement for essentially any type of constraint.
  • It doesn’t require the constraint start/end times to be aligned with key frames.
  • The animator can still animate the constrained object during the constrained interval, which then simply becomes an offset from the constrained position.

However, it also has some serious disadvantages:

  • Although in theory it’s straightforward to use, in practice it may be difficult to explain/make intuitive.
  • It only works for constraints with both a start and end time. Half-open and infinite-time constraints are ill-defined.
  • Although the start and end times of the constraint can be adjusted without affecting the unconstrained portion of the animation, it would mess up the animation within the constrained time in potentially unpredictable and counter-intuitive ways.
  • The animation data model would need to have insight into what constrains what, and would need to use that knowledge during animation evaluation, which may complicate implementation and make animation data more “rigid” in terms of what it animates.

Due to the disadvantages, this approach might not be suitable as the built-in solution for animation-level constraints. However, due to its simplicity of implementation, it would probably be straightforward to build as a user via something like procedural animation nodes (depending on the design of animation nodes). So if this approach does turn out to be useful in some situations, users may still be able to access it.

Constraints as Properties of Strips

Yet another possibility is for constraints to be a property of key frame animation strips. In this approach, constraints determine how the animation data in a strip is interpreted for the entire duration of a strip. Rather than the user specifying that a constraint exists over a certain range of time, they instead create the constraint within a strip, and the constraint is active within that strip for its entire span.

On its own, this would have the disadvantage of not aligning object positions/poses at the start and end of the constrained time interval. However, these strips could also optionally (at the user’s discretion!) have auto-computed keys at the start and end to match the animation of the layer below (or a preceding abutting strip on the same layer) in world space.

For the example of picking up a cup, the animator might have the main animation on the first layer, and then for the section where the cup is constrained to the hand they would put a strip on the second layer that animates only the cup, in the space of the hand.
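The optional auto-matched boundary keys could be sketched roughly like this. This is a hedged 1D sketch: `world_pose_below`, `to_strip_space`, and the hand-space conversion are illustrative stand-ins, not a real API, and reverse-solving a full transform is harder than subtracting a position:

```python
def auto_boundary_keys(world_pose_below, to_strip_space, t_start, t_end):
    """Auto-matching keys for a constrained strip: sample the layer
    below in world space at the strip's start and end, and reverse-solve
    each sample into the strip's parameter space.

    If a constraint has no reasonable reverse solve, this feature is
    simply unavailable for it."""
    return {
        t_start: to_strip_space(world_pose_below(t_start), t_start),
        t_end: to_strip_space(world_pose_below(t_end), t_end),
    }

# 1D example: the cup's world position from the layer below, and a
# "hand space" that is just a moving offset.
hand = lambda t: 2.0 * t
cup_world_below = lambda t: 2.0 * t + 1.0
to_hand_space = lambda world, t: world - hand(t)
```

Here `auto_boundary_keys(cup_world_below, to_hand_space, 0.0, 4.0)` yields `{0.0: 1.0, 4.0: 1.0}`: the cup sits at a constant offset of 1 in hand space, and the strip's boundary keys match the layer below in world space.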

An alternative and more general interpretation of this approach is that the user can specify the parameter space of the animation data on a per-strip basis. This more general interpretation expands the feature beyond just constraints, and also suggests a powerful solution for switchable control rigs, among other things.

This approach has a lot of advantages:

  • It can handle any type of constraint. The only limitation is that if the constraint has no reasonable way to reverse-solve, then the auto-matching start/end keys feature wouldn’t be available for that particular constraint.
  • More than one constraint can be added inside a strip, so complex setups don’t require e.g. a huge stack of constraint strips on successive animation layers.
  • It’s a unified system that has the potential to simultaneously address some other things we want, namely control rig switching and animation retargeting.
  • It provides a straightforward conceptual framework for baking animation to different parameter spaces: specify a different parameter space (e.g. set of constraints) for a strip, and tell Blender to convert the strip to the new space. Or to bake to a new strip with that new space.
  • Adjusting the start/end times of a constrained strip messes up neither the animation on either side of it nor the animation within it, aside from (non-destructive) truncation.
  • The animation both inside and outside a constrained strip can be defined in its natural parameter space.
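The baking idea above (converting a strip to a new parameter space) can be sketched in the same hedged 1D terms; `eval_world` and `to_new_space` are illustrative stand-ins, not a proposed API:

```python
def rebake_strip(eval_world, to_new_space, frames):
    """Convert a strip to a new parameter space: sample the strip's
    world-space result per frame and reverse-solve each sample into
    the new space, producing keys for the rebaked strip."""
    return {t: to_new_space(eval_world(t), t) for t in frames}

# 1D example: bake a world-space cup animation into a moving hand space.
cup_world = lambda t: 3.0 * t
to_hand_space = lambda world, t: world - 1.0 * t   # hand moves at speed 1
```

For example, `rebake_strip(cup_world, to_hand_space, [0.0, 1.0, 2.0])` produces `{0.0: 0.0, 1.0: 2.0, 2.0: 4.0}`: the same motion, now expressed as keys relative to the hand.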

There are also some disadvantages:

  • As a user, adjusting your animation in the vicinity of the transition to/from a constrained strip is a little more challenging, because the animation data on either side of the transition is in a different strip and parameter space. Tooling can likely help with this, but it’s nevertheless a trade-off. And it exacerbates the challenge of other questions we have, like “where do the keys go”.
  • Like the offset-based approach, the animation data model would need to have insight into what constrains what, and would need to use that knowledge during animation evaluation, which complicates things and makes animation data more “rigid” in terms of what it’s hooked up to.
  • Compared to e.g. constraints as offsets where (due to its simplicity) most of the trade-offs are (probably?) pretty obvious, the sophistication and complexity of this approach makes it a lot more likely that this list of disadvantages is significantly incomplete. It also makes it a tall order to implement if we can’t find ways to keep it simple at the technical level.

I have a hard time evaluating this proposal. It’s talking in quite abstract terms with just one practical example about a hand and a cup. It’s difficult to tell if any of the 3 approaches works for other types of constraints we would like to support.

Perhaps IK, FK and switching between them should be defined as part of the armature in a way that Blender understands, instead of as an opaque constraint. Sometimes the channels are IK, sometimes FK, and then you also have channels that animate the transition between IK and FK parameter spaces for different parts of the armature.

If editing tools, keyframing, constraints and interpolation can understand this they might be more user friendly. I’m a bit concerned that if we make too generic a system, with constraints or control rigs coming with their own parameter spaces, it might be overcomplicating things.

As far as I know some animation apps do fine with just one set of IK and FK channels. Similarly, switchable rigs are interesting on paper but it’s also not clear to me that these have been used in production much. Maybe there are good reasons, but I would like to see practical examples of that. I would guess that if we had good native IK/FK switching in Blender this hand and cup example might be straightforward.


@brecht I agree with the point that, with something like RigNodes/Control Rig having its own “constraint”-like system, this could get overcomplex.
The idea of a constraint, though, living in a layer, or in a layer and then time-boxed by a Strip, where it can stand alone from the things it is interacting with (cup/hand), is powerful, and I and others have used features like that for decades in large-scale projects.

“Similarly switchable rigs are interesting on paper but it’s also not clear to me that these have been used in production much.”

This is a valid concern. At first glance this seems like a new thing, but it is old tech that has been production-proven on the biggest projects, films, and games for decades in one form or another. It has finally gotten to the point of not requiring specialized software or a specialized role (motion capture editing), and made it into mainstream thinking now that Unreal has Control Rig.

Even in Blender, I have provided many working example add-ons in the Animation Module and during the workshop that let us have a flexible rig approach, allowing animators control over how they need their rig to work per shot, even per pose.

I think, though, that trying to make it over-general, as in “An alternative and more general interpretation of this approach is that the user can specify the parameter space of the animation data on a per-strip basis”, makes it feel like those should be features of the other systems, rather than trying to make a MEGA constraint. Unless it was a procedural node constraint, but then we come back to what RigNodes are, and I have a hard time separating the ideas at that point. Maybe that is what you are reacting to as well?


I don’t question that animation level constraints and switchable rigs can be useful. But it’s important how exactly they work and which simplifying assumptions you make. And to validate the design against a set of use cases.

Is there a list written down somewhere with:

  • Use cases that we want to support
  • Links to information about production systems that do something similar

I remember some things but certainly also forgot others.

My understanding is that this parameter space per strip is the core idea of the proposal, so that’s what I was reacting to.

But it’s not concrete enough for me to evaluate it. For example some questions spring to mind:

  • Would IK/FK switching in general be handled as animation level constraints?
  • Would that be part of a control rig that does many other things, or be somehow separate and under more animator control?
  • In the example of the cup and hand, does it make more sense to think of this as two constraints? One to enable IK on the hand, and another to constraint the cup and hand together?
  • How is the parameter space of a strip defined, does it come from the constraint somehow?
  • Does the parameter space of a strip somehow affect what is editable in the 3D viewport, or what gets (auto)keyframed? To make it clear for the users what can actually be edited and will have an effect? Can the channels of multiple strips be edited together or not?
  • If there can be multiple constraints on a strip, how do you get a single parameter space from that? Is this for cases where each constraint controls a different part of the body, or also for multiple competing/layered constraints on the same part of the body?
  • How do the concepts of space switching / dynamic re-rooting and per strip parameter spaces relate?

Here is the link to the GitHub with my thoughts on animation. I am doing the research and examples in my spare time, so please take that into consideration; I am not being paid for this. It would be nice to research and explore these ideas further full-time, 5 days a week (for now that is not a reality).

@brecht would love to hear your thoughts as you seem to always be the person who needs to be impressed for any changes to be made.


I have a hard time evaluating this proposal. It’s talking in quite abstract terms with just one practical example about a hand and a cup. It’s difficult to tell if any of the 3 approaches works for other types of constraints we would like to support.
But it’s important how exactly they work and which simplifying assumptions you make. And to validate the design against a set of use cases.

Those are excellent points. More examples need to be part of the introduction/context, and the different approaches should explore how they would work in the context of those examples. And just generally, I agree that the motivations/goals should be outlined more explicitly and clearly.

I’ll work on this.

Over the last couple of weeks, I’ve also been spending some time in the “breaking your own designs” head space, and I have additional things I want to write up about that, regarding whether any of this is worth it in terms of the technical and UX complexity it might incur, and whether a simpler tools-based approach might be better.

But the approaches themselves still need to be fleshed out better for proper discussion, as you point out.

Thanks Brecht!

@AdamEarle, thanks for the link. There are some interesting references in there that I hadn’t seen. It’s not clear to me if you have a design proposal specifically relating to the design discussed here?

When talking about switchable rigs, we need to be more specific. I don’t have a great understanding of these concepts and hope the animation team can bring clarity here. From what I can tell:

  • We can think of a rig as having 3 parts. There is the articulation rig, which is the minimal yet complete set of bones needed to define a character pose. Then on one side there is a deformation rig that applies that to a particular mesh (not relevant to the discussion here). And on the other side there is the control rig that gives higher-level controls to the animator.
  • In more traditional rigging, there is a fixed control rig and animators will keyframe the control rig (or the articulation rig when it has no separate control rig).
  • Ephemeral rigging is the idea of a rig as a tool. The control rig is not persistent and not keyframed. It is the articulation rig that is keyframed. Keyframes typically need to be more dense, because if the articulation rig by itself (for example) does not have IK, hitting the IK target cannot be done with sparse bone rotation keyframes.
  • Traditionally a rig only has forward solving, from controls to articulation. If a control rig supports reverse solving from articulation to controls, it can be used to edit animation from different sources, for example created by different rigs, motion capture, or physics solvers.
  • Full Body IK is a solver that solves IK for the entire body at once, where the parenting hierarchy, definition of IK targets, rotation limits, … is solved together as a black box. This is not strictly required for ephemeral rigs as a concept, but having this available as base functionality significantly simplifies building them.
  • IK/FK Switching is about switching some part of the body to be controlled by either IK or FK. The controls/channels to be edited by the user differ depending on the mode. There can also be channels that control smooth transitions between the modes.
  • Space switching is the idea that you can change bones in the control rig to a different space. Possible spaces for a bone can be its parent space, world space, or parented to another bone or object in the scene.
    • One way of using this is to change the space of an IK position control, a hand can for example be placed in the space of a hip bone, and stay in place as the rest of the body is animated.
    • What is unclear to me is whether this concept includes dynamically changing the root bone (and the space of the FK rotation channels along with it). Also unclear is if and how FK rotation control space switching is used in practice. Maybe a “look at” constraint enforced by an IK solver?
  • Space switching and IK/FK switching are orthogonal concepts. That is, space switching changes the space of a control, but it does not switch to a different control. Both have the challenge of ensuring continuity when switching in an animation. Solutions might involve keyframing both controls or spaces automatically, using some kind of compensating keyframes, or baking down changes to a common articulation rig.
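The compensating-keyframe idea in the last bullet can be sketched with a 2D toy transform (an angle plus a translation). Real code would invert full 4x4 matrices, and these function names are illustrative:

```python
import math

def world_of(local_angle, local_pos, parent_angle, parent_pos):
    """Compose a 2D (angle, (x, y)) local transform with its parent,
    giving the world transform."""
    c, s = math.cos(parent_angle), math.sin(parent_angle)
    x, y = local_pos
    return (parent_angle + local_angle,
            (parent_pos[0] + c * x - s * y,
             parent_pos[1] + s * x + c * y))

def compensating_key(world_angle, world_pos, new_parent_angle, new_parent_pos):
    """Re-express a world transform in a new parent's space, so that the
    pose is unchanged at the frame where the space switch happens."""
    c, s = math.cos(-new_parent_angle), math.sin(-new_parent_angle)
    dx = world_pos[0] - new_parent_pos[0]
    dy = world_pos[1] - new_parent_pos[1]
    return (world_angle - new_parent_angle, (c * dx - s * dy, s * dx + c * dy))
```

Round-tripping `world_of` through `compensating_key` returns the original local transform, which is exactly the continuity guarantee: the control does not jump when its space changes, because the compensating key absorbs the difference.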

Unreal takes these ideas which have typically been done as add-ons and makes them a core part of the design. Things are less ephemeral, for example space switching involves space keyframes that are put on a timeline. Though from the docs it seems like you might need to bake them down to the articulation rig if you want to switch to different spaces or control rigs.

Thinking of the hand and cup example in terms of how it would be done in Unreal, I think constraining a hand to a cup would not be something that is executed by itself? But rather involves setting up space switching and IK for the control rig to handle it.

Some great content here @brecht, I will add this information in and make amendments. The idea of the GitHub is not so much a proposal, but to try and get people talking and thinking about the future of Blender’s animation process.

The best explanation of ephemeral rigging I’ve heard.

I am not a good rigger and barely understand much of what you mention; however, I would like to contribute a set of ideas about constraints, which come to mind from using the FreeCAD Sketcher. Although the sketches are 2D, I think some concepts could be transferred to 3D by adding one more coordinate and the roll. Sorry if I go off topic.

I upload a short video and comment on what you see:

  1. I create a pair of lines that behave like bones in EditMode, and join them at one end with a ConstrainCoincident.
  2. I add a ConstrainAngle to them and check the relationship between the two lines, and also modify the value of the constraint.
  3. I add a new line and selecting one of the previous ones, add a ConstrainParallel and check the behavior between lines. I remove the Constraint and do the same with a ConstrainEqual (distance) and a ConstrainPerpendicular.
  4. I convert the ConstrainAngle to Reference, which internally modifies a parameter of the constraint, and only informs me of the existing angle.
  5. I show the list of constraints and the list of lines.
  6. I show a partial rig of a skeleton, where some bones have their distance and angle restricted, and I play with them a bit to see how they interact.

Lines are not 3D bones, and the purpose of a Sketch is to completely constrain a drawing and not animate it, but anyway I leave a list of concepts, which I think could be applied if skeleton animation were based exclusively on constraint animation.

  • Constraints as objects or elements independent of the bones, which link or relate them.
  • Visual representation and selection from the 3D viewer of constraints, with the possibility of filtering.
  • Add the constraints as a reference in EditMode, and make the constraint effective in PoseMode, if requested.
  • Selection of the bones associated with a constraint and vice versa, selection of the constraints associated with a bone.
  • Constraints for joints.
  • Tools to animate the constraints and not the bones.
  • Tools and keyboard shortcuts to manipulate block or individually multiple selections of constraints.
  • The color of the constraint can show whether it is active, inactive, or its influence is partial.
  • Show Wire Armature while editing Constraints.

Other options:

  • Constraints as forces acting together. If the end of a bone is pulled toward two different places, it will end up in the middle, or closer to one position if that constraint is more influential.
  • Soft constraints with tolerance and damping margins.
  • Different priorities and methods of mixing the influence of constraints (mix, add, over, etc.)
  • Momentary (per-frame) constraints, or permanent constraints.

To solve:

  • Is PoseMode necessary, or can constraints be accessed from ObjectMode, even if they are part of an Armature that acts as a container parent?
  • In the Sketcher module, conflicting, contradictory, or unacceptable constraints give an error and the Sketch is blocked until the offending constraints are removed. What should be done in these cases?
  • Is the parent-child relationship necessary? Can it be avoided, or must it coexist? If kept, should it be a constraint?

As I said at the beginning, I have no way to know the implications, incompatibilities, or real benefits of all that I mentioned; it simply seemed to me a different approach from the traditional parent-child one, which could have certain advantages and reduce the relationships between objects to a minimum, which perhaps takes better advantage of nodes.

Thank you for taking the time to read this.


Woaah, nice input! Got anything else?

General Notes

Dependency cycles seem like they’d still be a thing for any rigging system that allows arbitrary rigs. Maybe you meant it reduces the chances of a cycle because constraints are only enabled on a per-layer or strip level?

Constraints as Interpolators

Isn’t this just a normal [Cup ChildOf(Hand)] constraint setup? Am I missing something? What is meant by “Constraints as Interpolators”…? Do constraints not affect keyframes? That seems unnecessarily limiting and worse than our current system.

I understand the importance of separating posing from interpolation (that an animator may prefer, on a whim, to pose in FK/IK and even go back and forth when working on a single pose. Afterwards, they may change the rig to one that interpolates better. Reverse solving allows this workflow because it preserves poses.) But, I don’t see how constraints magically allow you to separate the two. Hmm… Perhaps, if I assume that constraints don’t affect keyframes, then enabling a rig does not break those keyposes, just the way they interpolate. So you add a constraint to ensure proper interpolation? What if I wanted to modify keyposes with the new rig? What if I baked to every frame? Then we’re back to what we currently have anyways. Even if I didn’t, it seems like rigs, poses, and interpolation all go hand in hand. The rig defines proper interpolation. And rigs cannot smoothly interpolate unless the end poses match anyways.

As to how (for reverse solving)? I’ve already implemented my own, so have Unreal, RigOnTheFly, and Richard Lico, and iirc Sybren’s rigging nodes will support it. It just comes down to inverting constraints, the same way NLA remaps keys. You’ll need more info than a constraint can provide, but that’s the core idea.
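As one concrete (and hedged) instance of such an inversion: analytic two-bone IK is the classic reverse solve, recovering FK angles from a world-space target. This is a generic textbook sketch in 2D, not any particular implementation mentioned above:

```python
import math

def two_bone_ik(target_x, target_y, l1, l2):
    """Analytic two-bone IK (reverse solve): given a reachable target,
    recover shoulder and elbow angles for bone lengths l1, l2.
    One of the two mirror solutions is returned."""
    d2 = target_x ** 2 + target_y ** 2
    # Law of cosines for the elbow angle (clamped against float error).
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder: angle to the target minus the inner triangle angle.
    inner = math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    shoulder = math.atan2(target_y, target_x) - inner
    return shoulder, elbow
```

Running forward kinematics over the returned angles reproduces the target exactly, which is the "preserve the pose" property that reverse solving is for.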

Constraints as Offsets

This doesn’t make sense to me. Let me know if I’m wrong: the moment you move the hand, that would break the relation; the cup no longer follows the hand. This seems to me the equivalent of no constraints and having to tediously key a cup to follow a hand. More confusing even, that you have to match some arbitrary reference interpolation to do so. I don’t see the advantage.

Or perhaps, you mean that the cup would still follow the hand, w/ the newly added hand movement changes… But then, that’s what we already have with a normal hierarchy or ChildOf constraint. What’s changed?

Why even imagine a cup moving in world-space linearly while it’s detached from the hand? That seems unnecessary. Why not: “I want to pick up the cup. So I place the hand around the cup, enable the Cup-Hand rig, then move the hand. Done”? Why is it: “I want to pick up the cup. So I look for the cup’s reference interpolation and match it. Then enable the Cup-Hand rig. Then move the hand”. Why the extra step?

Assuming this is the goal of this method, we can already do this, though it does depend on the rig (and it should). Really, I don’t see how any animation in any system can be stored independently of any constraint. Animation affects controls affects constrained deformers. To not store controller animation is to … be stuck in TPose.

Constraints as Properties of Strips

Are these not the same thing, just different in where that range is specified? Sorry if I’m being pedantic.

As I understand it, this method is about: constraints defined per strip where constraints behave as they already do, however they are evaluated per strip. Animation layer blending uses the constraint-evaluated channels to blend the underlying (generally) FK rig. I like it. It’s like an implicit bake.

A small problem: what if you wanted to apply animation layering effects to the controls/constraints? That means the constraints must live outside of a strip. A potential solution is that strips only dis/enable rigs and blending just plain works on fcurves. The rig would have to live at some higher level.

Additionally, you may want to blend in IK space, then later in FK space. Currently, it seems like you’d always blend in FK space (imagine blending a walk to a run or crouch; you’d probably want to blend the feet in IK space). A potential solution is to require the user to mark when an fcurve/property-channel should be blended pre-constraints or post-constraints (if the fcurve exists at all).

Seems like a non-issue ( assuming a rig exists at a higher level than strips)? The rig determines the space to blend in, so ensure both strips use the same rig. Alternatively, make a new strip above the transition (or specifically as part of the transition) that specifies the rig to blend with.

You call it “baking animation to a different parameter space”, but that’s just fancy talk for “reverse solving support”, or the ability to enable a rig while preserving existing animation. You don’t need your method to do so. Grouping constraints (a rig) into a strip doesn’t suddenly make it easier or more straightforward either. (edit: I misremembered. Rigs are necessary to specify which bones to preserve when baking, because all bones are generally not preserved.) Conventionally, everyone sees visual keying in one direction (visual key IK down to FK), but it’s simpler to think of it in the more general way. Visual keying, in general, is just “change rig while preserving animation/poses”. That means you can visual key to IK (FK to IK switch). With this mindset, the implementation of visual keying is the same in either direction: you must always account for the existing constraints of the armature that’ll exist post-bake. What existed before doesn’t matter besides caching the pose to preserve. Again, I’ve mentioned above the people who are already working on reverse-solvable rig implementations.

That’s a problem I’ve been trying to figure out for a while. The simplest answer I’ve found is to do as Richard Lico does: bake everything down to FK when you’re done. When going back to adjust the animation, it’s just mentally easier to take a few minutes to re-set up a rig (assuming an automated rigging system like RigOnTheFly) than it is to remember how the heck you set up the rig yesterday… or even an hour ago. Having to maintain rigs across multiple actions, NLA strips, time intervals, blends, … is a pain. The former synergizes with flow (you actively choose how you want things to move); the latter goes against it (you have to take a moment to remember, inspect, and re-figure out how things are rigged).

(I separated my posts since the first is too long and only replies to OP)

These are not orthogonal. Both are “change rig, preserve animation/deformation poses”. The specific bone being controlled isn’t really relevant.

Space switching doesn’t require re-rooting a bone hierarchy. You can do the same thing with a CopyTransforms constraint to a new bone that is a child of the new space.
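For what it’s worth, that kind of space switch is just a change of parent that preserves the world transform: the new local transform is the inverse of the new parent’s world transform composed with the old world transform. A minimal 2D sketch (rigid transforms stored as a unit-complex rotation plus a complex translation; all names hypothetical):

```python
import cmath

# A 2D rigid transform stored as (unit complex rotation, complex translation).
def compose(a, b):
    """world = a ∘ b: apply b inside a's space."""
    ra, ta = a
    rb, tb = b
    return (ra * rb, ta + ra * tb)

def inverse(a):
    r, t = a
    return (1 / r, -t / r)

def rigid(angle, x, y):
    return (cmath.exp(1j * angle), complex(x, y))

old_parent = rigid(0.3, 1.0, 2.0)
new_parent = rigid(-1.1, 4.0, 0.5)
local = rigid(0.7, 0.2, 0.0)

world = compose(old_parent, local)
# Re-parenting without a visual jump: solve the local transform in the
# new parent's space so the world transform stays the same.
new_local = compose(inverse(new_parent), world)
```

This is the same math whether you literally re-root the hierarchy or route it through a CopyTransforms target that is a child of the new space.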

I’ve changed FK rotation spaces to reverse a chain hierarchy, to make a bone rotate with swing and twist independently (since that generally interpolates a spine in a nicer way), to aim space (a head look-at), to disable hierarchy rotation effects, etc. An example use for reversing a chain hierarchy is a handstand, where you want the hips to pivot from the torso, which pivots around the grounded hands. Similar for hanging from monkey bars.

From what I understand, a parameter space just refers to an active rig. An IK rig has its f-curves in IK space (IK parameter space). I don’t like the wording “parameter space”… it’s just active animated rigs that affect the character. The wording only really makes sense for literal spaces: the IK hand is in hips space, or door-knob space, or the cup is in hand space. That’s what we already have in vanilla Blender; a ChildOf constraint changes its effective, but not literal, f-curve space. A setup which doesn’t reuse the original bone’s f-curves (so you don’t destroy existing animation or interpolation) is: original bone CopyXforms(tmp_fcurve_buffer_bone), where tmp_…bone is a hierarchy child of the new space. The buffer bone is controlled by the animator, who treats it as if it were the original bone; thus you get the phrase “original_bone is now in new_space”, and we’ve “changed the parameter space”. Let me know if I’m wrong.

By orthogonal I meant that you can do them independently of each other, not whether they are similar in some way (which I agree they are). For example, the X and Y axes are orthogonal but are both spatial dimensions.

So to be more specific, my understanding is that for a bone you could do only an IK/FK switch, only a space switch, or both. They are distinct features.

In mathematics the term “space” can be much more abstract; I’m just trying to clarify that the “space” in “space switching” here refers specifically to the 3D coordinate space of bones and objects, and nothing else.

I agree; I was actually asking the opposite. That is, if you have a space-switching feature in the software, would dynamic re-rooting just be a particular application of that feature, rather than a distinct one?

From this example I understand the answer is that dynamic re-rooting is not a distinct feature.

I understand it’s meant to be something along those lines, but it’s not specific enough. And the per-strip aspect is important too.

For example a more specific answer to my question could be:

“Per-strip parameter space” means “space-switching state stored in a strip”. The chosen spaces would be fixed for the duration of the strip. This state controls the meaning of the values in the animation channels for that strip only. Smoothly switching to another space in an animation would involve either blending with another strip that has different space-switching state, or applying the space switch and continuing to edit.
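A sketch of how that state could be modeled (hypothetical names only, just to make the idea concrete): each strip records the space its channels are expressed in, and blending across strips with different spaces first converts both values into a common space:

```python
from dataclasses import dataclass, field

@dataclass
class Strip:
    # The "parameter space" state, fixed for the strip's duration
    # (e.g. "hand_in_world" vs. "hand_in_cup"). Hypothetical.
    space: str
    channels: dict = field(default_factory=dict)

def evaluate(strip, channel, frame):
    # Stand-in for real f-curve evaluation.
    return strip.channels[channel](frame)

def blend(strip_a, strip_b, channel, frame, factor, to_common_space):
    """Blend across strips in different spaces: convert both values into a
    common space first, since blending the raw channel values is meaningless."""
    a = to_common_space(strip_a.space, evaluate(strip_a, channel, frame))
    b = to_common_space(strip_b.space, evaluate(strip_b, channel, frame))
    return (1.0 - factor) * a + factor * b
```

The point of the sketch is only that the space state lives on the strip and scopes the meaning of that strip’s channels, which matches the “stored in a strip, fixed for its duration” description above.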

IK/FK switching is not part of the concept of “per-strip parameter space” and is not a native feature understood by the animation strips; instead, it is controlled by animation channels that affect constraints / rig nodes.

I’m not saying this is the correct answer, just the type of answer that would be specific enough to start understanding this proposal.

Dependency cycles seem like they’d still be a thing for any rigging system that allows arbitrary rigs. Maybe you meant that it reduces the chance of a cycle, because constraints are only enabled at a per-layer or per-strip level?

I did indeed mean that dependency cycles would be impossible in the described systems. I could certainly be missing something, of course. But in all the systems I described, the user explicitly specifies the evaluation order in some way, and evaluation can only progress forward. So there is always a fixed number of known steps, which terminates.

So, for example, you could have object A constrain object B, which then constrains object A again (A → B → A). But the evaluation order of those constraints is explicitly specified, so it doesn’t create an evaluation cycle, and the result is well defined.
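A tiny sketch of why explicit ordering rules out cycles (purely illustrative; constraints reduced to one-dimensional “copy value plus offset” steps):

```python
# Constraints as an explicitly ordered list of steps. Each step reads the
# current state and writes one object's value, so evaluation always runs a
# fixed number of forward steps and terminates.
def run(steps, state):
    for target, source, offset in steps:
        state[target] = state[source] + offset
    return state

state = {"A": 0.0, "B": 10.0}
# "A follows B", then "B follows the already-updated A": A → B → A is just
# two sequential assignments, not a dependency cycle.
run([("A", "B", 1.0), ("B", "A", 2.0)], state)
# state == {"A": 11.0, "B": 13.0}
```

There is no graph to solve, only a list to walk, which is why the result is well defined even when the same objects appear on both sides.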

Having said that, that’s only within the animation-level constraints themselves. Depending on how you have e.g. regular constraints interact with that system, it could still result in a dependency cycle in the larger dependency graph. But within the animation-level constraints themselves there wouldn’t be any cycles, by construction. Which is what I meant.

As for the rest of what you wrote, I think you bring up some important points. But I think there may also be some misunderstanding of the proposal(s). This is my fault because, as Brecht pointed out earlier, it’s rather abstract and hard to follow right now. Eventually I’ll come back to this and try to present the ideas more concretely. But it’s not a high priority right now, because we’ve decided to focus first on designing and implementing the things that already have a (relatively) clear way forward.