My goal is to write a Python script that exports each collection in a blend file to its own Alembic file.
This is what I have:
import bpy

for collection in bpy.data.collections:
    path = bpy.path.abspath("//")
    path += collection.name.replace(' ', '_') + ".abc"
    bpy.ops.wm.alembic_export('EXEC_DEFAULT', filepath=path)
    print("Exported file: " + path)
First of all, this operation needs to run in the background, but the documentation says:
as_background_job (boolean, (optional)) – Run as Background Job. Enable this to run the import in the background, disable to block Blender while importing. This option is deprecated; EXECUTE this operator to run in the foreground, and INVOKE it to run as a background job.
But running with 'INVOKE_DEFAULT' opens the file browser, so it looks like it's actually the opposite (it works with 'EXEC_DEFAULT').
Then I need to restrict the export to a specific set of objects (here, each collection). One way seems to be using the selected option after selecting all the objects in that collection, but I came across @sybren's page, where it says this:
Integration with the 2.8 branch implies that we can use Collections to define a mapping between objects in Blender and information in an Alembic file. Until collections are fully implemented, we use the active SceneLayer to define this mapping. Since this collection is saved in the blendfile it should be easy (for an artist) to re-export changed data to Alembic, or to switch between showing “real” data and loading from Alembic.
- Set of objects to export to / import from a file == collection
- Export multiple collections in parallel, so that the timeline only needs to be iterated once (idea by Jason Schleifer).
Is this still relevant today? If there is a mapping between collections and the Alembic data structure, it could help.
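In the meantime, the selected-option approach mentioned above could be sketched roughly like this. This is only a sketch, not a tested solution: it assumes the alembic_export operator's selected parameter restricts the export to the current selection, and the export_path helper is just a hypothetical name for the path-building step from my script. It would need to run inside Blender, so bpy is only imported inside the export function:

```python
def export_path(base_dir, collection_name):
    # Build the .abc path for a collection, with spaces replaced
    # by underscores (same convention as the script above).
    return base_dir + collection_name.replace(' ', '_') + ".abc"

def export_collections_separately():
    import bpy  # only available when running inside Blender

    for collection in bpy.data.collections:
        # Deselect everything, then select only this collection's objects.
        bpy.ops.object.select_all(action='DESELECT')
        for obj in collection.all_objects:
            obj.select_set(True)

        path = export_path(bpy.path.abspath("//"), collection.name)
        # selected=True is meant to limit the export to the selection;
        # whether this behaves as expected here is exactly my question.
        bpy.ops.wm.alembic_export('EXEC_DEFAULT', filepath=path, selected=True)
        print("Exported file: " + path)
```

Note that collection.all_objects also includes objects from nested child collections, which may or may not be what you want compared to collection.objects.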