Hi again,
So would this be a more acceptable (less troublesome) approach?
- Create the complete Collada export data for each sample frame as if it were a static export.
- Then compare the sampled Collada datasets to find which of the exported values are animated.
- And finally extract the Collada animation data from the Collada samples.
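The three steps above can be sketched roughly like this. This is only an illustration of the diff-and-extract idea, assuming each per-frame export can be flattened into a key/value mapping; the function names and the flat-dict representation are hypothetical, not the exporter's actual data structures:

```python
# Hypothetical sketch: detect animated values by diffing per-frame
# "static export" snapshots, then extract curves for just those values.

def find_animated_keys(samples):
    """Return the keys whose value differs between any two sampled frames.

    samples: dict mapping frame number -> flat dict of exported values
             (a stand-in for one full static Collada export per frame).
    """
    frames = sorted(samples)
    first = samples[frames[0]]
    animated = set()
    for frame in frames[1:]:
        for key, value in samples[frame].items():
            if value != first.get(key):
                animated.add(key)
    return animated

def extract_animation(samples, animated):
    """Collect (frame, value) curves for the animated keys only."""
    return {key: [(f, samples[f][key]) for f in sorted(samples)]
            for key in animated}

# Toy data standing in for three sampled frames:
samples = {1: {"loc_x": 0.0, "scale": 1.0},
           2: {"loc_x": 0.5, "scale": 1.0},
           3: {"loc_x": 1.0, "scale": 1.0}}
animated = find_animated_keys(samples)
curves = extract_animation(samples, animated)
print(animated)  # only loc_x changes across frames
print(curves)
```

Constant values (like `scale` here) never show up as animation data, which is the point of comparing the samples first.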
I could take a look at that, but our Collada exporter already has its own Blender profile and actually exports Blender-specific data (I am not guilty here, that's just what I found when I started digging). Adding the animation FCurves for the Blender-specific data set is rather straightforward. And somehow it looks much easier to me to find the animation data in the Blender format instead of digging through the more complex Collada format.
Anyway, what I originally had in mind when creating the BCAnimationSampler class was to create something independent from Collada that could be used later by other exporters as well, for example by adding a Python API. I was chatting with other developers about this idea and it looked like a wanted feature.
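For the format-independent idea, the API surface might look something like the following. This is purely a hypothetical sketch of the shape such an interface could take; the class and method names are invented here and are not the actual BCAnimationSampler interface:

```python
# Hypothetical sketch of an exporter-independent sampling API.
# An exporter registers channels with a per-frame getter; the sampler
# evaluates them over the frame range and keeps only the ones that
# actually change, so the exporter never touches Blender internals.

class AnimationSampler:
    def __init__(self, frames):
        self.frames = list(frames)
        self._samples = {}

    def sample(self, name, getter):
        """Sample one channel; getter(frame) returns its value at that frame."""
        self._samples[name] = [(f, getter(f)) for f in self.frames]

    def curves(self):
        """Return only the channels whose sampled values actually vary."""
        return {name: points for name, points in self._samples.items()
                if len({value for _, value in points}) > 1}

sampler = AnimationSampler(range(1, 4))
sampler.sample("loc_x", lambda f: 0.5 * (f - 1))  # varies per frame
sampler.sample("scale", lambda f: 1.0)            # constant
print(sampler.curves())  # the constant "scale" channel is dropped
```

A format-neutral result like this could then be consumed by the Collada exporter or any other exporter alike.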