Detecting object and data animations resulting from constraints and/or drivers

I am working on detecting and exporting all animations of objects within a scene for the scene actions. The easy part is to simply fetch all actions, and from there fetch all FCurves, and I am done… almost. With this approach I only catch the directly animated objects.

It is a bit trickier to find all animations that happen via constraints and drivers. I have an idea how this could be solved, but I am not sure if I am making things too complicated here. Here is what I have worked out so far:

  • Evaluate the first frame of the scene actions (timeline).
  • Make temporary reference copies of all exported objects and their data on the first frame.
  • Step through the frames and evaluate the scene on each subsequent sample frame:
    – Check all object attributes and all object data attributes against their reference copies.
    – Whenever an attribute has changed significantly from the reference data, start recording that attribute for the remaining sample frames.
  • When the sampling is done, I have a list of records of all animated object and object data attributes, regardless of whether they have associated FCurves or are animated by constraints or drivers.
  • Now I can use the list for whatever comes next (for example, exporting the records as animation curves).
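The steps above can be sketched in plain Python. This is only an illustration, not the actual exporter code: plain dicts stand in for evaluated Blender objects, and the attribute names, the `evaluate` callback, and the epsilon threshold are all assumptions made for the example.

```python
EPSILON = 1e-6  # "changed significantly" threshold (assumed value)

def sample_animated_attributes(frames, evaluate):
    """Detect and record animated attributes by comparing each sampled
    frame against a reference copy taken on the first frame.

    `evaluate(frame)` returns {attribute_name: float} for the scene
    state at that frame (standing in for the evaluated Blender objects).
    Returns {attribute_name: {frame: value}} for attributes that change.
    """
    reference = evaluate(frames[0])  # the temporary reference copy
    curves = {}                      # attr -> {frame: value}
    for frame in frames[1:]:
        state = evaluate(frame)
        for attr, value in state.items():
            if attr in curves:
                curves[attr][frame] = value    # already marked animated
            elif abs(value - reference[attr]) > EPSILON:
                curves[attr] = {frame: value}  # first significant change
    return curves

# Toy scene: location_x is driven, scale stays constant.
def evaluate(frame):
    return {"location_x": 0.1 * frame, "scale": 1.0}

curves = sample_animated_attributes(list(range(1, 6)), evaluate)
print(sorted(curves))  # ['location_x']
```

A constant attribute never deviates from its reference copy, so it produces no record at all; only attributes that actually change end up in the returned curve list.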

There are some details missing in my description above. But here I am only asking whether making temporary reference copies of all exported objects is a good idea, or if this is totally over the top. I wonder if there is a better approach to find out which object and data attributes are actually animated within a scene.

Thanks for any tip on this :slight_smile:



Python exporters use scene.frame_set() to step through the frames that they need to export, and then revert to the original frame when done. So that part makes sense to me.
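That step-and-restore pattern can be sketched as a small context manager. The `frame_current` and `frame_set()` names mirror Blender's `bpy.types.Scene` API; the `FakeScene` stand-in is only there so the sketch runs outside Blender.

```python
from contextlib import contextmanager

@contextmanager
def frame_sampling(scene):
    """Step through frames inside the block, then restore the frame
    the user was on.  `scene` is any object exposing `frame_current`
    and `frame_set()`, mirroring Blender's `bpy.types.Scene`."""
    original = scene.frame_current
    try:
        yield scene
    finally:
        scene.frame_set(original)  # revert to the original frame when done

class FakeScene:
    """Minimal stand-in so the example runs without bpy."""
    def __init__(self):
        self.frame_current = 1
    def frame_set(self, frame):
        self.frame_current = frame

scene = FakeScene()
with frame_sampling(scene) as s:
    for frame in range(1, 11):
        s.frame_set(frame)
        # ... evaluate and record the scene at this frame ...
print(scene.frame_current)  # 1
```

Using a `try`/`finally` (or context manager) guarantees the original frame is restored even if sampling raises an exception partway through.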

If you are intending to copy the Blender data, that may end up somewhat complicated; if it's the converted Collada data from the reference frame, it may be more reliable.

In general, properties may be remapped to another value, or may affect multiple properties in the Collada file. So if you compare Blender object properties, you end up with duplicated conversion logic, which is likely to have bugs and go out of sync.

If, on the other hand, you can compare the converted Collada data between frames, the code may end up simpler and more reliable. I'm not sure if or how that would work with the OpenCollada API. You could always use an intermediate data structure rather than writing to OpenCollada directly, but that's a bunch of extra code of course. Still, it may be simpler than duplicating the conversion logic for animation; it depends.
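The target-side comparison suggested here can also be sketched in a few lines, assuming per-frame snapshots of the already-converted export values are collected in an intermediate structure. The channel names and snapshot layout below are purely hypothetical:

```python
def animated_channels(snapshots):
    """Given per-frame snapshots of already-converted export values
    ({frame: {channel: value}}), return the channels whose value is
    not constant across frames -- i.e. the ones that need animation
    curves in the output file."""
    frames = sorted(snapshots)
    first = snapshots[frames[0]]
    return {
        channel
        for frame in frames[1:]
        for channel, value in snapshots[frame].items()
        if value != first[channel]
    }

# Hypothetical converted data for three sample frames.
snapshots = {
    1: {"xfov": 49.1, "translate.x": 0.0},
    2: {"xfov": 49.1, "translate.x": 0.5},
    3: {"xfov": 49.1, "translate.x": 1.0},
}
print(sorted(animated_channels(snapshots)))  # ['translate.x']
```

Because the comparison happens after conversion, a single code path produces the values for both static and animated channels; detection is just a diff over the converted snapshots.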

Hi Brecht, thanks for responding.

Actually, the BCAnimationSampler class that I have programmed so far uses an intermediate data structure, and it is completely independent of Collada and OpenCollada. The only dependency on Collada is that the Collada exporter feeds my module with the objects it intends to export, and for each object it gets back a list of animation curves, where each animation curve is essentially a std::map<frame, float>. The data is always directly related to a Blender object attribute or data attribute.

The Collada exporter itself then either exports the curves as they are, or it converts or combines data (for example, animations of camera->lens and camera->sensor are converted to the Collada camera xfov).
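For illustration, the lens/sensor-to-xfov combination mentioned above follows the usual pinhole-camera relation xfov = 2·atan(sensor_width / (2·lens)); the exact code in the exporter may differ, and the function name here is made up for the example.

```python
import math

def xfov_degrees(lens_mm, sensor_width_mm):
    """Horizontal field of view from focal length and sensor width,
    via the pinhole-camera relation xfov = 2*atan(sensor/(2*lens)).
    Collada expresses <xfov> in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * lens_mm)))

# Blender's default camera: 50 mm lens, 36 mm sensor width.
print(round(xfov_degrees(50.0, 36.0), 2))  # 39.6
```

Because xfov depends on both inputs, an animation on either camera->lens or camera->sensor alone still produces an animated xfov curve in the output, which is why the two Blender channels have to be combined during export.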

My question was mostly about how to detect animations that are not defined with FCurves. I only wanted to know whether making temporary reference copies of the exported objects and their data structures is a convenient way to decide if an attribute is animated or not.

So what I will do next is (unless I get a strong veto, of course :slight_smile: ):

  • Create reference copies of each exported object.
  • For each sampled frame, check all object and data attributes for changes.
  • When a difference to the reference data is detected, the attribute is marked as animated and sampled.
  • When the sampler is done, it deletes the reference objects and returns the lists of sampled data.
  • The Collada exporter then does all the Collada-specific work.

It’s up to you. I just want to make clear that in my experience, making animation detection dependent on the target data structures (Collada) and independent of the source data structures (Blender) is usually more reliable than what you are planning to do. It avoids different code paths for animated and non-animated data conversion, and the bugs and code complexity that often come with them.

Hi again;

So would this be a more acceptable (less troublesome) approach?:

  • Create the complete Collada export data for each sample frame, as if it were a static export.
  • Then compare the sampled Collada datasets to find which of the exported values are animated.
  • And finally extract the Collada animation data from the Collada samples.

I could take a look at that, but our Collada exporter already has its own blender-profile and actually exports Blender-specific data (I am not guilty here, that’s just what I found when I started digging :slight_smile: ). Adding the animation FCurves for the Blender-specific data set is rather straightforward. And somehow it looks much easier to me to find the animation data in the Blender format instead of digging through the more complex Collada format.

Anyway, what I originally had in mind when creating the BCAnimationSampler class was to create something independent of Collada that could be used later by other exporters as well, for example by adding a Python API. I was chatting with other developers about this idea, and it looked like a wanted feature.

That’s what I would suggest doing typically, but I’m not so familiar with the Collada code so I can’t say for sure if it’s better in this specific case.