Proposal: Moving Blender's rigging to a component based workflow

Since apparently 2020 will be the year for a lot of improvements regarding rigging and animation, I figured I would make this proposal right now to hopefully gather some feedback and discussion early on.

Introduction

Right now most applications, Blender included, let you rig by creating objects, then parenting and constraining them. While this gets the job done, it requires a lot of surrounding custom tools, which often lock riggers into a strict workflow.

A lot of studios end up developing or using Autorigs which often offer a component based workflow.

Autorigs offer a few strong points:

  • Consistency between all the rigs in a given production, which in turn offers a few benefits:
    • Easy to share animation between assets
    • Minimize the room for errors while updating a rig (control names will be the same, behavior will be the same, etc.)
  • A much faster workflow, since the actual rigging is almost entirely abstracted away from the rigger.
  • The ability to quickly add new components to a rig. Want a character with 4 arms? 2 heads? It’s possible.

However, in my experience, autorigs also come with caveats:

  • They take the rig out of the rigger’s hands and lock them into making only the rigs that the autorig can make. If you need some custom behavior, you will probably need to make it a new component of the autorig, or include it in a script that runs once the rig is done building.
  • Maintaining an autorig takes time and effort (my last job was being the full-time maintainer of an autorig for a year).

A workflow that is so widely used should, in my opinion, be part of 3D applications rather than live in external tools.

Proposal

First things first: a nodal view of the rig makes the most sense to me. With “Everything Nodes” coming, I’ll assume this is the direction Blender will take for rigging.

What I’m proposing is:

1. Rig Components.

In my mind, a rig component is nothing more than a node group, very similar to what the shader editor offers:

  • You can expose internal data as inputs and outputs of the group.
  • A component can be as complex or as simple as you want it to be. Here are a few examples of components I have in mind:
    • arm, leg, spine, etc.
    • A complete character rig.
    • A chain with IK/FK switching.
    • A simple bendy bone with drivers on each segment.
  • They can be “instantiated” or duplicated. Modifying the original component should ideally be reflected in its instances, if the user wants that.
  • They can be nested.
    • The complete character component would be composed only of arm, leg, and spine components.
    • An arm (or leg) component would be composed of:
      • An IK/FK switch component.
      • A bendy bone component for the upper arm and another one for the lower arm.

With the asset manager coming up, I think it would be really great to be able to quickly import components into any Blender scene.

2. A sprinkle of proceduralism.

Let’s have a few examples of what I have in mind:

  • You have your bendy bone component but you want to use it in a game engine, which needs a bone per bendy bone segment. The segment count would be exposed on your BBone component and, when updated, Blender would automatically create (or delete) deform bones and constrain them to the proper BBone segment via a Copy Transforms constraint.
  • You have an FK chain setup that has a “Chain Length” attribute. Just like in the previous example, this needs to create/delete bones based on the attribute.
    You could also have a Bool attribute that, if checked, would add a BBone component on each of the FK bones.
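The create/delete behavior in both examples can be sketched as a sync function (plain Python, hypothetical names, not Blender API): given the exposed segment count, it adds or removes deform-bone records so there is always exactly one per BBone segment, each carrying a Copy Transforms-style constraint spec:

```python
# Illustrative sketch only -- not real Blender API. Shows how a
# "segments" input on a bendy-bone component could procedurally
# create or delete deform bones, one per BBone segment.

def sync_deform_bones(existing, segments, bbone="upper_arm"):
    """Return an updated deform-bone list for the requested count.

    `existing` is the current list of bone dicts; surplus bones are
    dropped and missing ones are created, each constrained to its
    matching BBone segment."""
    bones = list(existing[:segments])          # delete surplus bones
    for i in range(len(bones), segments):      # create missing bones
        bones.append({
            "name": f"DEF-{bbone}.{i:03d}",
            "constraint": {"type": "COPY_TRANSFORMS",
                           "target": bbone, "segment": i},
        })
    return bones

bones = sync_deform_bones([], 4)      # user sets segments to 4
bones = sync_deform_bones(bones, 6)   # raising the count adds 2 bones
bones = sync_deform_bones(bones, 3)   # lowering the count deletes 3
```

Existing bones (and anything the animator keyed on them) survive a count change; only the tail of the list is created or deleted.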

3. Implement some constraints in the armature’s edit mode.

This would add the possibility of creating Guide Bones that drive the rest of the armature without the user having to move every single bone manually.
The edit mode constraints should not be related to the dependency graph outside of edit mode. It would be its own independent little thing that would absolutely not slow down the rigs in any way.

This would work really well in a component-based workflow, as component authors could expose the guide bones and hide the rest of the setup from the user, making it really simple and fast to add a new component to your rig, even if the proportions are wildly different from the original component.

I’m actually working on an addon that does exactly that. Being written in Python, it has performance issues, but you can see it as a proof of concept: GitHub - HolisticCoders/edit-bone-constraint: Add Constraints in an armature's edit mode to easily modify its rest pose
It basically implements a few constraints and a simple dependency graph that gets evaluated only while you’re in the edit mode of an armature.
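The "simple dependency graph evaluated only in edit mode" idea can be sketched like this (a toy 1-D model with hypothetical names, not the addon's actual code): constraints are sorted topologically so each driven bone is resolved after its guide, and moving one guide repositions everything downstream:

```python
from graphlib import TopologicalSorter

# Illustrative sketch only: a tiny standalone dependency graph for
# edit-mode constraints, independent of Blender's main depsgraph.

def evaluate_edit_constraints(rest_pose, constraints):
    """rest_pose: {bone: position}; constraints: list of
    (driven_bone, guide_bone, offset) tuples. Each driven bone is
    placed relative to its guide, in dependency order."""
    deps = {driven: {guide} for driven, guide, _ in constraints}
    order = TopologicalSorter(deps).static_order()  # guides first
    rules = {driven: (guide, off) for driven, guide, off in constraints}
    pose = dict(rest_pose)
    for bone in order:
        if bone in rules:
            guide, off = rules[bone]
            pose[bone] = pose[guide] + off
    return pose

# Moving one guide bone repositions the whole chain automatically.
rest = {"guide": 0.0, "upper": 0.0, "lower": 0.0}
pose = evaluate_edit_constraints(rest, [("upper", "guide", 1.0),
                                        ("lower", "upper", 1.0)])
```

Because this graph only exists while the armature is in edit mode, it adds nothing to pose-mode evaluation, which is the "would not slow down the rigs" property described above.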


On top of all of that, Blender could probably ship with its own curated components that non-rigger users could quickly use, letting them work on what matters to them.

Notes

I have purposefully not put much thought in the actual implementation (especially for the procedural part), just the end user experience.

My experience with rigging in Blender is still very limited, as I’m mostly used to Maya. Forgive my ignorance if some of what I said in the first part doesn’t apply to Blender. However, I know the software fairly well in other areas, as I’ve been using it for a good number of years.

The goal here isn’t to lay down a definitive version of this, but rather to open the discussion early on and make Blender’s rigging the best it can possibly be.

Hopefully what I’m proposing makes sense to you and you see the benefits of it.


You’re right. Blender rigging blender rigging!!


I hope there are enough advanced users here to understand all the implications this will have for increasing the productivity of their work on animation and rigging. I do. I come from a lot of nodal tools (Nuke, PfTrack, ICE, Softimage, a little bit of Houdini), so it’s the next natural step for Blender.
I guess once we fix the memory issue with Animation Nodes, this could be bliss (rigging nodally).


I really liked your proposal. I’m on a quest to improve my rigging; maybe I’ll become a rigging guy in Blender. This approach looks great: I want to reuse parts of a rig without wrapping my head around complicated code.


Makes sense. I have used Animation Nodes for facial rigging, using that system to create complex drivers for many shapes with few controls. Nodal rigging is really nice, the tricky part is making it run quickly.

I have also used ICE to do the same in Softimage, which proved to be very powerful (auto IK switches on collision, assisted path ground stick for walking, etc). It opens doors when rigging in visual scripting and modular approaches.

But indeed you’d need some good defaults and quick playback/evaluation. I’m pro your ideas!


I too used to work with Maya, though I was mainly doing modeling and not much rigging. I watched a few “Cult of Rig” streams, and the first thing he talks about is component-based rigging, much like OOP in programming, so I guess it’s a powerful workflow for TDs.

I don’t think Blender’s advanced riggers will be against those ideas.
That is already what they tried to achieve by combining their efforts into the Rigify add-on, an autorig that lets users define their own rig types and metarigs.

When we talk about the “Everything Nodes” project, what is most commonly mentioned is replacing the constraint stack with nodes, because it is a visual representation that is meaningful to all users.

But of course, procedural generation of rigs is an included target, currently handled by Python.
What Draises said about the Animation Nodes add-on shows that riggers are already experimenting with rig creation using nodes.
I trust the riggers in the community using the Rigify, BleRiFa, and Animation Nodes add-ons to give pertinent feedback on the “Everything Nodes” project.

Where can we give feedback on the process? Through proposals on Right-Click Select?

I know that with Everything Nodes, once matrix and shape data are exposed in a node-friendly way, most rigging work will be able to replace or extend driver workflows, and building extra functionality with compounds (node groups) will be quite easy. That could be the start, and it would need to either replace, mimic, and/or work with the current rigging systems.

After that you’d need, yes, IK nodes or compounds, auto-collision nodes or compounds, corrective shape driver nodes or compounds, fine control over weights, and other compounds that would mix physics into human control… but that would be a different story with many other elements.

1. The main feature to develop here is speed. A node compound or a single evaluation node needs to run as quickly as (if not quicker than) a traditional driver coded in the F-Curve editor. Otherwise it’s only good for prototyping, the rigger will eventually have to rebuild the math as drivers anyway to raise playback FPS and keep feedback animator-friendly, and node-based rigging would ultimately be unusable.

2. The second main thing is UX design consistency (e.g. when you make a driver in the F-Curve editor UI, it automatically builds that driver in the rig’s node tree, ready to be exposed to nodal workflows that are hard to code in the driver UI, opening doors to what nodal rigging can do automatically).

An example of the speed and inconsistency issues: my AN shape driver system with 150 driven shapes played back at 15 fps, while with drivers the playback would be 30+ fps. The UI was running at 4+ ms, so eventually the node tree had to update on edit rather than in realtime. This proved I had to create a driver workaround outside the node tree and drive something less evaluation-heavy inside it, using driver/node hybrids to bypass any direct shape driving.

The issue with separate workflows that aren’t built in harmony and consistency is that both UI- and node-dependent rigs become heavy to evaluate (the node tree polls on top of the rig poll), harder to learn, and more confusing for the end user (two areas to troubleshoot instead of one). The fix is unification: exposing the inner workings of the surface GUI as a visual programming language that still gives a powerful, quick-to-learn, and flexible UX for the rigging process.

Currently, the functions branch is an embryo of Everything Nodes.
I have not followed its progress recently, but you probably can’t rig anything with it yet.
So there is no way to give feedback on work that has not begun yet.
You will have to wait for that.

Right-Click Select is a place for feature requests, but it is probably better suited to small ones.
A proposal for a big project may receive a large number of votes without it being clear what is appreciated in the proposal and what is not.

If you have a clear idea, you can share a detailed document here.
If you want to contact the devs to get involved and collaborate on rigging stuff, you can try the mailing lists or the IRC channel.
https://wiki.blender.org/wiki/Communication/Contact

Animation Nodes for 2.80 will probably just be there to avoid a drop in usability compared to 2.79.
I would expect improvements in an official branch dedicated to the Everything Nodes project.
But that will probably take one or two years.

Just found this awesome video from Disney Research and wanted to share it with you guys:

It’s not related to any software

With regards to speed, Blender’s armature modifier is currently really awful in terms of performance-- compare to any game engine skeletal deformation. It’s bad enough that it’s hard to imagine that a nodes based rigging system would lead to any noticeable slowdown in comparison. It would have to be really bad. Like recompiling every frame bad.

Meanwhile, even if something is good only for prototyping, that’s a real good use. I came to Blender from writing some HLSL for game engines and my first thought about material/compositing nodes was, how cumbersome. But HLSL has informed my nodes and my node designs have informed my HLSL. WYSIWYG is a great way to code, and nodes are coding, just with a GUI.

When it comes to rigging/constraint nodes, I am personally just scared that we won’t be given the tools we need to actually do this. The focus of existing constraints on Euler angles is a serious problem that developers are starting to recognize. We need arbitrary math, which I expect. But we also need full transform matrices, not just a set of Euler angles, scale, translation values from which we can’t deal with things like inherited skew. Given that these are absent from material nodes, despite the fact that they would be useful, given that material nodes don’t even have a reasonable format to deal with 4x4s, it’s very easy to imagine a rigging constraint system that doesn’t provide all the tools it should.
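The inherited-skew point can be shown with a small 2-D example (a plain-Python sketch, not Blender or constraint code): a sheared matrix, like the one a child inherits from a rotated parent with non-uniform scale, cannot survive being decomposed into separate rotation and scale channels and recomposed.

```python
import math

# A sheared (skewed) 2-D transform, the kind of matrix a child can
# inherit from a rotated parent with non-uniform scale. Decomposing it
# into rotation + scale channels and recomposing drops the shear, so
# only the full matrix preserves the transform.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

shear = [[1.0, 0.5],
         [0.0, 1.0]]

# Naive channel decomposition: scale from column lengths, rotation
# angle from the first column.
sx = math.hypot(shear[0][0], shear[1][0])
sy = math.hypot(shear[0][1], shear[1][1])
angle = math.atan2(shear[1][0], shear[0][0])
rot = [[math.cos(angle), -math.sin(angle)],
       [math.sin(angle),  math.cos(angle)]]
recomposed = matmul(rot, [[sx, 0.0], [0.0, sy]])

# rotation * scale is always shear-free: the off-diagonal term is gone.
lost_skew = abs(recomposed[0][1] - shear[0][1])
```

Any constraint system that only passes loc/rot/scale channels between bones throws away `lost_skew`; passing full 4x4 matrices is what keeps it.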

Beyond that, yeah, I would love to see “rest transform”, “rest length” inputs and a rest pose output with a bake button next to it that would allow us to maintain a separate set of “edit mode constraints.” Dealing with trying to customize a Rigify armature is a nightmare of resetting stretch-to rest lengths in precisely the correct order… Nope, missed one, do it all over!

Registered just to say that node-based rigging is the thing I really want to see in Blender as a rigger and animator. I assume that only after it’s implemented (and of course it should be fast :3) could we see some big animation studios switching to Blender.

Rigify is great and I use it all the time, but at some point I hit a limit: retargeting, interacting with the floor/other objects/rigs, animating complex extreme motions, real-time IK/FK follow, full-body IK, etc. For most of that I can create additional rigs and scripts, but that’s pretty complex. I want to see the whole rig with all its connections (modifiers, constraints, drivers, children, parenting) as nodes, as one whole picture, instead of messing with bone layers, selecting each bone to edit constraints, etc.

Creating and adjusting rigs with code is also not so convenient. Again, the main issue is that I just can’t see the whole picture due to the large number of connections between bones and their parameters. It gets harder and harder to add new things and relations at some point. Another big question is modularity and the ability to easily copy the whole rig (or part of it) to create a new one. And of course real-time visual feedback.

So I think node based rigging should have high priority.

P.S. I’m new here, so can someone tell me who’s actually working on the animation part of Blender (somehow related to this thread), or where I can get this info? Thanks.

I’ve been working on an addon the last couple of weeks to test this idea. I decided to make it public once I was able to make an IK-FK switch with snap options by only using nodes. You can check it out here:
https://blenderartists.org/t/rigging-nodes/1238994


Looking forward to more development in this area, potentially very helpful. Thank you.