Neural Rigging for blender (with RigNet)

Hi everybody,

I have worked on a better integration of the RigNet addon

Here’s what is new in version 0.1 alpha

  • Removed dependencies: binvox, open3d, trimesh
  • Auto install pytorch
  • Left/Right names for bones
  • Post Generation utilities: merge bones, extract metarig
  • Samples control
  • Progress Bar
  • Can skip bone weights prediction

This tool covers the task of assigning deformation bones to a character. Traditionally it has to be performed manually, and while every character is a bit different, after a while it feels like doing the same thing over and over.

This description fits a class of problems that are hard to translate into conventional algorithms, but for which, given enough data, a reasonably accurate statistical model can be built. Tools that use such models to automate a procedure fall under the field of Machine Learning, which has become more and more widespread with the increase of computing power.

RigNet is a Machine Learning solution that can assign a skeleton to a new character, based on extrapolations from a set of examples. It is licensed under the GNU General Public License Version 3 (GPLv3), or alternatively under a commercial license.

When I saw skeletal characters coming out in their presentation, I knew that I wanted something like that in blender. So, when the code was made public, I wrote a RigNet addon.

With python as its programming language, making it “speak” with blender was no big deal, but the need for third-party modules made it difficult for everyone to use.

Dependency Diet

The first version of the addon resorted to trimesh and open3d to handle 3d operations, which is redundant inside a full-fledged 3d app.

Binvox, a standalone tool used to extract volumetric representations, was problematic too, being a binary, non-open-source part of the bundle.

When the blender foundation backed the project, my first concern was to eliminate all 3d dependencies. The script still needs pytorch and pytorch-geometric, but their licenses (modified BSD and MIT, respectively) allow their inclusion in a free software project.

As an additional constraint, pytorch must match the CUDA version installed on the system. In the end, I have added an auto-install button that downloads the missing modules. It uses the Package Installer for Python (aka pip) and virtualenv behind the scenes.
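As a rough sketch of how such an auto-install can work (the helper name and the inverse mapping to a wheel index are my own illustration, not the addon’s actual code): pytorch publishes CUDA-specific wheels on dedicated indexes, so the install command can be derived from the detected CUDA version.

```python
import sys
from pathlib import Path

def torch_install_command(env_path, cuda_version="11.1"):
    """Build the pip command that installs a CUDA build of pytorch into the
    virtual environment at env_path. Hypothetical helper for illustration;
    the addon's real code may differ."""
    # the venv keeps its interpreter under bin/ (Scripts/ on windows)
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    python = Path(env_path) / bin_dir / "python"
    # pytorch's CUDA wheels live on per-version indexes, e.g. .../whl/cu111
    index = "https://download.pytorch.org/whl/cu" + cuda_version.replace(".", "")
    return [str(python), "-m", "pip", "install", "torch",
            "--extra-index-url", index]
```

The environment itself would be created first with python’s built-in `venv.create(env_path, with_pip=True)`, and the command run with `subprocess.check_call`.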

Installation

Install the archive; Neural Rigging is then listed in the Rigging section.

Installing pytorch can be tricky. It is usually done at the beginning of a coding project, with tools like virtualenv, which is part of python, or conda, a third-party installer. Things are different, though, for addons of a bigger application.

CUDA is a requirement. This could change in the future, as the torch-geometric library has recently added CPU support. At present, prebuilt packages support CUDA 10.1, 10.2 and 11.1.

Owners of nVidia hardware can install the CUDA toolkit from the manufacturer’s page.

Once expanded, the preferences display the system info and the missing required packages.

If CUDA is found on the system, the Install button can be used to download pytorch to the designated location. By default, the _additional_modules subdirectory in the addon path is used.

It may take a while, because the whole pytorch library (about 2 GB) has to be downloaded. Some users may want to run it with the console window open.

Trained Model

A Machine Learning tool needs the data from a training session, or it won’t be able to run. The Model Path is the folder where the results of the training are stored.

The developers of RigNet have shared their trained model on a public link which can be opened by clicking the Download button.

The default location is the RigNet/checkpoints subfolder of our addon directory, but we can choose another path if we wish.

This model has been trained on about 2700 rigged characters that can be downloaded here. RigNet’s GPL-3 license and the optional commercial license apply to the trained model too.

Once all the requirements are fulfilled, the addon preferences should look like the picture below and display no warnings.

Remesh, Simplify, Rig

Characters do not always consist of one single mesh, so the addon asks for a collection as input. If we want to create a new collection, the utility button next to the property will do that from the selected objects.

Then we need a single, closed, reduced mesh for our computations. We can generate one using the button next to the mesh field.

The Voxel Size and Face Ratio can be tweaked, but the result should not exceed five thousand triangles. The addon panel displays the current face count, and a warning when there are too many.
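The triangle budget itself is simple arithmetic: after triangulation, an n-gon yields n − 2 triangles. A small sketch of the check (the function names are hypothetical, not the addon’s actual code):

```python
def triangle_count(polygon_sizes):
    """Triangles after triangulation: an n-gon yields n - 2 triangles.

    `polygon_sizes` is a sequence of per-face vertex counts, e.g. what one
    would collect in blender with [len(p.vertices) for p in mesh.polygons].
    """
    return sum(n - 2 for n in polygon_sizes)

def exceeds_budget(polygon_sizes, limit=5000):
    """True when the proxy mesh is too dense for rig prediction."""
    return triangle_count(polygon_sizes) > limit
```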

Parameters

The original RigNet exposed a bandwidth parameter that delivers more bones at lower values. I have inverted that parameter into Density, which hopefully makes more sense to the user: greater values add more joints. Denser rigs require more GPU memory, so more powerful hardware is required to generate rigs with more bones.

Joints with a weighted influence lower than the Threshold value will be ignored.

While RigNet processes the geometry using 4000 samples, I have added a Samples parameter, as fewer samples sometimes deliver faster and even better results.
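To make the two controls concrete, here is a sketch of how Density and Threshold could behave. The inverse mapping shown (base / density) and the function names are illustrative assumptions, not the addon’s actual formulas:

```python
def bandwidth_from_density(density, base=1.0):
    """RigNet's bandwidth yields more bones at *lower* values, so a Density
    control can invert it: higher density -> lower bandwidth -> more joints.
    The base/density mapping is an assumption for illustration."""
    return base / density

def filter_joints(joint_weights, threshold):
    """Drop joints whose weighted influence falls below the Threshold value.

    `joint_weights` maps joint name -> influence.
    """
    return {name: w for name, w in joint_weights.items() if w >= threshold}
```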

Animation controls

Animation rigs usually have additional controls besides the deformation bones. Rigify is the control generator included with blender. The usual workflow for auto-rig systems is “bones first, bind later”, but we have a bound skeleton already, so we are going to do the reverse.

We can adapt our bone layout to the one expected by rigify, add rigify attributes to the bones of our rig, and use a converter included with the neural rigging addon that creates a rigify base. From there, we follow the standard rigify workflow, as if we were rebuilding a rigify character.

This reverse workflow, and the tools involved, would be better discussed with the Rigify developers, and hopefully included in Rigify.

Is blender ready for Machine Learning and AI?

Of course it is: the addon system is very flexible, and many AI/Machine Learning projects are written in python, making it reasonably easy to bridge the two worlds.

If anything can be improved, making some of the internal voxelization and sampling functions available to the addons would make the job easier, and the execution faster.

An official solution for expanding the interpreter would be nice as well: something like a virtualenv for the addons.

Last, operators that take a long time might benefit from a progress bar, or some other way to inform the user about the current stage.

Is Neural Rigging ready for blender?

It’s a start. Though not a new field at all, the widespread application and diffusion of Machine Learning is relatively new. At present, rig prediction helps skip the most obvious steps in a character setup, but the delivered result usually needs polishing.

The requirement of 2 gigabytes of third-party libraries is quite unusual, and hopefully improvable. The trained model plays a key role, in that the actual behaviour of the tool sits in the data rather than in the code itself.

The original RigNet is released under a dual license, which implies that it can be freed from the copyleft restrictions of the GPL if a commercial license is bought from an authorized vendor.

The blender add-on is GPL only, but contains RigNet as a component. This condition is new to me: technically everything should be fine as long as the GPL is honoured, but it’s better to contact the original authors (tto[at]umass[dot]edu) if commercial implications arise.

Is Neural Rigging going to improve?

Adding more examples to the training dataset could be the first step for improvement. Also, torchscript could help make the addon faster and more portable.

It would be very interesting to add fingers to the dataset. Or we could remove the constraint of symmetry and treat the hands like characters: after all, fingers are human tentacles!

The progress of ongoing research may bring novelties as well, so we’d better stay tuned.

Please give the addon a try if you wish: I would love to get feedback and improve the alpha. Also, I could not test the addon on linux yet, so every report of a linux experience is welcome, especially about the install.

Thanks for reading all this,

Cheers,
Paolo

This is huge to me! What a time saver! I will definitely take a look into this when updating my rigging pipeline.

Great work Paolo!

Obvious questions: is the pytorch install a separate module which could be reused by other ML addons? And is it possible to have a shared install location for pytorch that other models could run on? There are lots of potential ML tools which could be integrated into Blender if pytorch / tensorflow were available.

Hi theois,

the default location for the pytorch install is a virtual environment in the addon folder. After the addon is loaded, pytorch is available everywhere in blender.

Technically it’s possible to use any directory, or an existing conda/virtualenv environment. For now it’s up to the addon developer.
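One way to make the packages of such an environment importable from anywhere in blender (a sketch of the idea; the addon may do this differently, and the helper names are mine) is to append the environment’s site-packages directory to sys.path when the addon loads:

```python
import sys
from pathlib import Path

def venv_site_packages(env_path):
    """Locate the site-packages folder of a virtual environment.

    Illustrative helper: assumes the standard venv layout, i.e.
    <env>/lib/pythonX.Y/site-packages on linux/mac and
    <env>/Lib/site-packages on windows.
    """
    if sys.platform == "win32":
        return Path(env_path) / "Lib" / "site-packages"
    version = "python%d.%d" % sys.version_info[:2]
    return Path(env_path) / "lib" / version / "site-packages"

def register_modules(env_path):
    """Make the environment's packages importable by blender's interpreter."""
    packages = str(venv_site_packages(env_path))
    if packages not in sys.path:  # avoid duplicate entries on addon reload
        sys.path.append(packages)
```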

It would be nice to have “environments” as a feature: it would allow adding sets of modules and assigning them to the addons that need them.

It is something that I need to investigate: a separate addon might provide such functionality.

Hello.
I am trying to get it to work on Linux (Kubuntu 20.04).
For now I have been able to make the addon detect CUDA 11.1.
Now I am having trouble with the “install” button for modules. When I press the “install” button I get an error message:

*The first time I press the install button:

Traceback (most recent call last):
  File "/home/yafu/.config/blender/2.90/scripts/addons/brignet/preferences.py", line 27, in execute
    venv_utils.setup_environment(env_path)
  File "/home/yafu/.config/blender/2.90/scripts/addons/brignet/setup_utils/venv_utils.py", line 237, in setup_environment
    ve_setup.create_venv(with_pip=with_pip)
  File "/home/yafu/.config/blender/2.90/scripts/addons/brignet/setup_utils/venv_utils.py", line 22, in create_venv
    venv.create(self.env_path, with_pip=with_pip)
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/venv/__init__.py", line 390, in create
    builder.create(env_dir)
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/venv/__init__.py", line 68, in create
    self._setup_pip(context)
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/venv/__init__.py", line 288, in _setup_pip
    subprocess.check_output(cmd, stderr=subprocess.STDOUT)
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/home/yafu/.config/blender/2.90/scripts/addons/brignet/_additional_modules/bin/blender', '-Im', 'ensurepip', '--upgrade', '--default-pip']' died with <Signals.SIGABRT: 6>.

location: <unknown location>:-1

*The second time I hit the install button:

Traceback (most recent call last):
  File "/home/yafu/.config/blender/2.90/scripts/addons/brignet/preferences.py", line 27, in execute
    venv_utils.setup_environment(env_path)
  File "/home/yafu/.config/blender/2.90/scripts/addons/brignet/setup_utils/venv_utils.py", line 255, in setup_environment
    subprocess.check_call(torch_install_script)
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/subprocess.py", line 358, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/subprocess.py", line 339, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/home/yafu/blender-2.90.1-linux64/2.90/python/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: '/tmp/torch_install_vxtzoj9t'

location: <unknown location>:-1

I’m not sure if this could be related to “pip”. In Ubuntu and Ubuntu-derived distros, if you install pip from the repositories, the “pip” command is used for python 2.x and the “pip3” command for python 3.x.

Hi Yafu, thanks for sharing your Ubuntu misadventures :slight_smile:

This seems more of a permissions issue: I might have overlooked something in the Linux section.

I have to test on an actual Ubuntu machine with CUDA, but that won’t be possible until next month.

In the meantime you can try a manual install, using virtualenv from a bash prompt.

Just make sure that you are using the same Python version as reported in the addon preferences.
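Something along these lines could work as a starting point. All paths and the wheel index are assumptions on my side, so check the pytorch site for the exact command matching your CUDA version:

```shell
# Hypothetical manual install -- the env path, python version and wheel
# index are assumptions; use the python version reported in the addon
# preferences and the index matching your CUDA version.
python3.7 -m venv ~/brignet_env
~/brignet_env/bin/python -m pip install --upgrade pip
~/brignet_env/bin/python -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu111
```

pytorch-geometric and its companion packages would then be installed the same way, from the wheel index matching your torch/CUDA combination.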

I am sorry I cannot help better at present; I have not used Ubuntu for a long time.

Please let me know, if you wish, how the manual install works.

cheers,
Paolo

OK no problem.
I’m going to try manual installation of the modules in the meantime.
Thank you.

Hello Paolo
This is great! For a long time now, 7+ years, I have seen AI coming into 3d production, animation, modeling and now rigging. I follow as much as possible the research papers on these topics, as well as SIGGRAPH, where we will see some interesting projects this year.
I have already developed some test AI-based modeling scripts, using a comparative analysis of models produced for VFX scenes in TV series. Machine learning will be used because it is a necessity, given the need to speed up production. On one side AI and machine learning make sense; on the other side, all the projects I have seen until now start from scratch, and very few reuse data from existing models… The main reason is the time it takes to do the modifications needed for new shapes, structures or simply designs… AI will help with that, by allowing an artist to lay out a shape, or an architect to draw some main forms, and letting the AI generate the articulation of elements within the specs provided by the artist/architect/doctor/etc… I am going to try your code and let you know how it went… Thank you for your work.