I was curious whether Blender has considered implementing an auto lip sync feature. By this I mean you feed in the audio dialogue file and the feature automatically matches the different mouth shapes (vowels and consonants) from the animation rig to the sound. A good example is the auto lip sync feature in Adobe Animate CC. I wanted to mention it here because it could make a great addition to the program.
Maybe it could work if you defined your own keyframes for each sound, since every rig is different.
You would have to look at the design of an auto lip sync function. That's how they work: you make different mouth positions, and the tool matches them to the sound. Some adjusting is needed afterwards, but it is a very efficient way to do lip syncing.
I am not sure whether it's a good idea to put this in core Blender, or rather have it as a standalone add-on.
Lip sync sounds like its own big project that could improve a lot if there is enough interest in the community.
This is a creepy example, but you can already do this with Rhubarb, which also works with OpenToonz:
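For context on how a Rhubarb-based workflow plugs into animation: Rhubarb analyzes the audio and emits timed mouth-shape cues, which an add-on then turns into keyframes on the rig's mouth controls. Here is a minimal sketch of that conversion step, assuming Rhubarb's JSON output shape (a `mouthCues` list of `start`/`end`/`value` entries, with values like `A`-`H` and `X` for rest); the frame rate and the keyframe tuple format are illustrative choices, not part of Rhubarb.

```python
import json

def cues_to_keyframes(rhubarb_json: str, fps: int = 24):
    """Convert Rhubarb-style mouth cues into (frame, mouth_shape) keyframes.

    One keyframe is placed at the start of each cue; a Blender add-on
    would then key the matching pose or shape key at that frame.
    """
    cues = json.loads(rhubarb_json)["mouthCues"]
    keys = []
    for cue in cues:
        frame = round(cue["start"] * fps) + 1  # Blender scenes start at frame 1
        keys.append((frame, cue["value"]))
    return keys

# Example cue list in Rhubarb's JSON style for a short clip
sample = json.dumps({"mouthCues": [
    {"start": 0.00, "end": 0.25, "value": "X"},  # rest position
    {"start": 0.25, "end": 0.50, "value": "B"},  # mostly-closed mouth
    {"start": 0.50, "end": 0.90, "value": "A"},  # open mouth
]})

print(cues_to_keyframes(sample))  # [(1, 'X'), (7, 'B'), (13, 'A')]
```

This also shows why the "adjust afterwards" step matters: the tool only decides which mouth shape goes where in time, and the animator still polishes the timing and transitions by hand.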