2022-07-28 Pipeline, Assets & I/O Meeting - Collections for Import/Export Design

Everyone is welcome on the Google Meet linked below every other Thursday, at 17:00 CEST.

Present: Aras Pranckevičius, Bastien Montagne, Brecht Van Lommel, Eskil Steenberg, Michael Kowalski, Sonny Campbell, Soslan Guchmazov, Sybren Stüvel, Zhen Dai

The meeting covers the pipeline & I/O module in a broader sense, including some topics hosted in other modules (e.g. some I/O python add-ons, or overrides and .blend file I/O from the Core module).


Collections for Import/Export Design

The main goal of this meeting was to kick-start a more concrete design and planning for the ‘Collections for Import/Export’ idea.

The general idea behind collections I/O is, at the basic level, to make it easier for the user to repeatedly import and/or export (a subset of) Blender data. Export could even be performed to several different formats and/or paths at once.

It could also be one of the bases needed to add support for some level of non-Blender data editing, e.g. to allow some control over the composition of USD layers into USD stages.

There are two possible approaches when it comes to manipulating external, non-native data:

  1. Convert ‘as best as possible’ all external data into Blender data, do the editing, and then export it back ‘as best as possible’.
  2. Add dedicated tools to manipulate the external format from within Blender, without necessarily importing everything into Blender data.

The first case is the simplest: everything is defined in Blender, as local data. Collection IO is then a simple wrapper around the I/O operator code, with storage of some meta-data needed by the importers and exporters (file paths, operator settings, etc.). In a second stage, it could also have an optional node system to apply complex and customized processing on load & save.
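As a rough illustration of this ‘simple wrapper’ idea, the sketch below models export metadata stored per collection and a dispatch loop writing to several formats/paths at once. All names (`ExportEntry`, `CollectionIO`, the exporter callbacks) are hypothetical plain Python, not actual Blender RNA or operator code:

```python
from dataclasses import dataclass, field

# Hypothetical metadata stored on a collection: one entry per export
# target, so a single collection can be written to several formats
# and/or paths in one go.
@dataclass
class ExportEntry:
    filepath: str
    file_format: str                      # e.g. "USD", "ALEMBIC", "OBJ"
    operator_settings: dict = field(default_factory=dict)

@dataclass
class CollectionIO:
    name: str
    exports: list = field(default_factory=list)

    def export_all(self, exporters):
        # Dispatch each entry to the matching exporter callback,
        # forwarding the stored operator settings.
        results = []
        for entry in self.exports:
            export_fn = exporters[entry.file_format]
            results.append(export_fn(entry.filepath, **entry.operator_settings))
        return results

# Usage: register stub exporters and run all configured exports at once.
exporters = {
    "USD": lambda path, **opts: f"USD -> {path}",
    "OBJ": lambda path, **opts: f"OBJ -> {path}",
}
coll = CollectionIO("Props", exports=[
    ExportEntry("/tmp/props.usda", "USD", {"export_materials": True}),
    ExportEntry("/tmp/props.obj", "OBJ"),
])
print(coll.export_all(exporters))  # ['USD -> /tmp/props.usda', 'OBJ -> /tmp/props.obj']
```

In real code the entries would live as RNA properties on the collection and the callbacks would be the existing I/O operators; the point here is only the shape of the stored metadata.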

The second case is more complex to properly design and implement. It could be based on the following ideas:

  • Consider the source data as a library, with Blender data from it considered as ‘linked data’ and therefore not editable. Possibly with dynamic import of a subset of the available data, and/or generation of light place-holders (e.g. to allow better support of extremely heavy/complex source data).
  • Data to be edited in Blender is defined as a ‘library override’ of the ‘linked data’.
    • How to define ‘override rules’ when exporting modified data? USD has several ways to override data, most other formats have none… This could be done automatically, with more control over it offered through the future node system?
  • Needs the ability to dynamically add/remove data in Blender. This would be needed e.g. when reading an ‘animated cache’ (like Alembic files) that adds or removes objects. If done at the depsgraph/evaluation level, this could be tricky to support; maybe it could rather be done in the Main data context before any depsgraph evaluation, e.g. on frame change?
  • Rendering:
    • If Blender draws everything, it needs all (supported) data in Blender, the rest is not rendered.
    • In USD case, if using USD rendering delegate, Blender would need to re-export its subset of edited data in the USD scene graph.
  • Use the Outliner as a viewer of USD graph?
    • What do we show there? Final stage (result of composition), or original layers of data? Probably makes more sense to show the final stage?
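The ‘dynamic add/remove’ idea above essentially amounts to diffing what a cache provides at the current frame against what is currently instantiated in Blender, then applying the difference before evaluation. A minimal sketch, with plain Python lists of names standing in for Main data and the cache (the function name is hypothetical):

```python
def sync_cache_objects(scene_objects, cache_frame_objects):
    """Diff the object names present in the cache at the current frame
    against the objects currently instantiated, and return which ones
    would need to be added and which removed."""
    current = set(scene_objects)
    wanted = set(cache_frame_objects)
    to_add = sorted(wanted - current)
    to_remove = sorted(current - wanted)
    return to_add, to_remove

# Example: at this frame the Alembic-like cache introduces "debris"
# and no longer contains "spark".
to_add, to_remove = sync_cache_objects(
    scene_objects=["ground", "spark"],
    cache_frame_objects=["ground", "debris"],
)
print(to_add, to_remove)  # ['debris'] ['spark']
```

In Blender this diff-and-apply step would run on frame change against real Main data, as suggested above, rather than during depsgraph evaluation.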

To take this project further, two main tasks were defined:

  1. The skeleton task (‘Collections for Import/Export’) needs to be fleshed out, and extended with several sub-tasks.
  2. A first prototype would be great to explore the topic further. It would focus mostly on the ‘simplest case’ described above, without the concepts of libraries/library overrides or stage composition.

The tasks and prototype could cover and implement the following:

  • Add some new data to the collection (file path, IO options). Probably stored as a list, since there could be several importers/exporters per collection.
    • IO options defined by the IO operators themselves?
  • Investigate between:
    • Having two types of entries in the collection, importer and exporters, with storage of ‘metadata’ in the collection (as e.g. a way for the importer to communicate with the exporter).
    • Having a third type of entries, combining import and export, for the exporter to have access to the same source of information as the importer.
  • Needs some on-demand evaluation mechanism (e.g. for Alembic animation, on frame changes).
  • Ability to switch between Blender data as ‘source of truth’ and imported data? E.g. switch between the regular Blender collection containing the animated geometry deformed with an armature, and the Alembic collection containing the cached deformed geometry?

Michael and Sonny are interested in working together on this. The core Blender team cannot dedicate implementation time for this project currently, but is always available for design discussion and support in general.


Aras asks about export performance: the current code spends a lot of time “composing” the USD stage for each object being exported. This is simply what happens when using the high-level “Usd” APIs; current guidance seems to be to use the lower-level “Sdf” APIs when creating/updating a lot of items at once. No objections to experimenting with that.


Sybren confirms that there will be a beta release of Flamenco 3 in the coming week. The team is still looking for help to iron out the details, especially for testing and fixing on the Windows platform. Feel free to check the project page if you are interested!

Other Topics

Plan: Rename module to just Pipeline module. Can be done officially in the meeting, if there are no objections. Was not discussed, needs more preparation/internal discussions first.

Next Meeting

The next meeting will be on Thursday 11th August, 17:00 CEST/Amsterdam time (your local time: 2022-08-11T15:00:00Z).

The provisional meeting agenda will be linked in the #pipeline-assets-io-module channel before the meeting.