Hey everyone. I'm new to this, but I want to know how much heavy lifting would be involved in implementing this paper (VIPER).
GitHub: Open-source code
What makes these different from normal pose bones is that they can twist and collide, which would be great for real-time muscle simulation. I imagine going into Pose Mode, moving bones around, and watching the muscle objects stretch and collide to produce accurate muscle deformations. These deformations could then be applied to a surface mesh via linear blend skinning, just like with normal pose bones.
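To make the linear blend skinning step concrete, here is a minimal dependency-free sketch of the idea: each vertex is deformed by a weighted sum of bone transforms, v' = Σᵢ wᵢ (Mᵢ v). This is just the standard LBS formula in 2D for brevity; the bone setup and weights are made up for illustration and are not from the paper or Blender's code.

```python
import math

# Hypothetical LBS sketch (2D for brevity); bones and weights are illustrative.

def rot2d(angle):
    """2x2 rotation matrix."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def apply_bone(matrix, translation, v):
    """Apply one bone transform M*v + t to a vertex."""
    x = matrix[0][0] * v[0] + matrix[0][1] * v[1] + translation[0]
    y = matrix[1][0] * v[0] + matrix[1][1] * v[1] + translation[1]
    return (x, y)

def skin_vertex(v, bones, weights):
    """Blend the bone-transformed positions; weights should sum to 1."""
    out = [0.0, 0.0]
    for (m, t), w in zip(bones, weights):
        bx, by = apply_bone(m, t, v)
        out[0] += w * bx
        out[1] += w * by
    return tuple(out)

bones = [(rot2d(0.0), (0.0, 0.0)),          # bone at rest
         (rot2d(math.pi / 2), (1.0, 0.0))]  # bone rotated 90 degrees
print(skin_vertex((1.0, 0.0), bones, [0.5, 0.5]))  # blend of (1,0) and (1,1)
```

Whatever drives the bone transforms (normal pose bones or a VIPER solve), this blending step stays the same, which is why deformations from a muscle solver could plug into the existing skinning pipeline.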
There are two routes I can think of:

1. Create a new object type called a muscle that implements the above paper.
2. Implement this with the existing PoseBone architecture by combining multiple pose bones with constraints. (This is very similar to B-Bones; perhaps the solver could make use of them.)
For number 2, a Python module could probably be implemented, but it would likely be better to code this in C for speed. Here, a "muscle" object would be a collection of pose bones, like bendy bones, whose position, rotation, and scale are computed by VIPER. I think this would be the best method, since existing constraints would carry over, as would linear blend skinning.
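As a rough sketch of what that could look like, here is a toy "muscle as a chain of bone transforms" in plain Python. Everything here is hypothetical: `Muscle`, `BoneTransform`, and `viper_step` are made-up names, and the solve step is a trivial stand-in that just redistributes length uniformly, where the real VIPER solver would run position-based rod dynamics with volume preservation and write the results back into pose bone transforms.

```python
# Hypothetical sketch of route 2: a muscle is a chain of per-bone
# transforms that an external solver overwrites each frame.

class BoneTransform:
    """Stand-in for a pose bone's animatable transform channels."""
    def __init__(self, location, scale=1.0):
        self.location = location  # head position along the chain
        self.scale = scale        # per-bone stretch factor

class Muscle:
    def __init__(self, n_bones, length):
        step = length / n_bones
        self.bones = [BoneTransform(i * step) for i in range(n_bones)]
        self.rest_step = step

    def viper_step(self, target_length):
        # Stand-in solve: distribute the new length uniformly and record
        # the stretch on each bone. A real VIPER step would instead solve
        # rod constraints (twist, collision, volume) and produce
        # per-bone position, rotation, and scale.
        n = len(self.bones)
        step = target_length / n
        for i, b in enumerate(self.bones):
            b.location = i * step
            b.scale = step / self.rest_step

muscle = Muscle(n_bones=4, length=2.0)
muscle.viper_step(target_length=3.0)  # stretch the muscle to 1.5x rest length
print([round(b.scale, 3) for b in muscle.bones])  # -> [1.5, 1.5, 1.5, 1.5]
```

The point of the sketch is the data flow: if the solver only reads and writes pose bone location/rotation/scale, then everything downstream (constraints, drivers, linear blend skinning) keeps working unchanged.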
Does this seem like something that could reasonably be done? Also, without CUDA, should I expect this to still run at real-time speed?