Keyframe_insert complexity grows when adding a lot of keyframes

Hello!
I’m writing a plugin for Blender to import custom model/animation formats and have encountered a problem. The models I need to import can contain several thousand frames of animation, and I fill in the frames on the armature using keyframe_insert. This worked fine for small animations, but when I tried something longer the import time exploded.
I wrote a simple benchmark to measure the performance of keyframe_insert, and it looks like its run time grows proportionally to the number of keyframes that already exist on the object. Here is the code for the benchmark:

import timeit
import bpy
import matplotlib.pyplot as plt


# Benchmark keyframe_insert on the first selected object.
ob = bpy.context.selected_objects[0]
runs_per_frame = 10_000
max_frames = 5_000
frames_step = 25

frames = []
results = []

for i in range(0, max_frames, frames_step):
    print(f"Timing frame {i}", end="\r")
    frames.append(i)
    # Time repeated inserts at this frame; after the first call the
    # keyframe already exists, so this measures re-insertion cost.
    results.append(
        timeit.timeit(lambda: ob.keyframe_insert(data_path="location", frame=i), number=runs_per_frame)
    )
print("\nDone. Plotting...")

plt.plot(frames, results)
plt.show()

Of course, you need to have a selected object in your scene to run this.
If you run the benchmark once and then re-run it on the same object, it is apparent that the run time of keyframe_insert stays at the level reached at the end of the previous run until new keyframes start being created.
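This growth pattern would be consistent with each insert doing a linear search/shift over an already-sorted keyframe array. That is only my assumption about the mechanism, but a small pure-Python model (no Blender code involved) shows the same behavior: per-insert cost rising with the number of existing keys.

```python
import bisect
import timeit

# Model: keyframes stored as a sorted list; inserting one key must
# find its position and shift the tail, which is linear in list size.
keys = []

def insert_key(frame):
    bisect.insort(keys, frame)  # O(log n) search + O(n) shift

# Time 1,000 inserts at three very different list sizes.
timings = []
for size in (1_000, 10_000, 100_000):
    keys.clear()
    keys.extend(range(size))
    timings.append(timeit.timeit(lambda: insert_key(size // 2), number=1_000))

# The per-insert cost should grow with the list size once the
# element shifting dominates.
print(timings)
```

In this model, importing n keyframes one by one costs O(n²) in total, which matches the explosion I see on long animations.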

Link to Imgur post with benchmark results

This is a big problem for me, as it makes importing large animations practically impossible, and I really need to do that.

Is this considered an issue? Where should I report it? Is there maybe a direct way to bulk-set keyframes on an object avoiding keyframe_insert?
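For context, the lower-level route I’ve been looking at is writing keyframe points into F-Curves directly. This is an untested sketch, and it assumes the fcurves.new / keyframe_points.add / foreach_set API behaves as documented (the action name and example data are made up):

import bpy

ob = bpy.context.selected_objects[0]

# Hypothetical example data: (frame, value) pairs for location.x.
keys = [(f, f * 0.1) for f in range(5_000)]

anim_data = ob.animation_data_create()
if anim_data.action is None:
    anim_data.action = bpy.data.actions.new(name="ImportedAction")

# One F-Curve per channel; index 0 = location.x here.
fcu = anim_data.action.fcurves.new(data_path="location", index=0)
fcu.keyframe_points.add(count=len(keys))

# foreach_set takes a flat [frame0, value0, frame1, value1, ...] sequence.
flat = [x for co in keys for x in co]
fcu.keyframe_points.foreach_set("co", flat)
fcu.update()  # recalculate the curve after the bulk write

If this works as I hope, it would allocate all keyframe points in one call instead of growing the array one insert at a time, but I’d still like to know whether the keyframe_insert behavior itself is considered a bug.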
