Hi all,
I’m Lucas and I’m excited to work on improvements to the Sequence Editor waveform drawing routines. Although I’ve contributed a couple of patches to Blender before, I’m still a big open-source noob, and I’m looking forward to joining the community and adding to the project.
You can see my project proposal here: Blender project proposal - Google Docs
Synopsis
Blender supports video editing through its video sequence editor. While the editor allows users to load videos and audio files, computing the audio waveforms for the audio tracks can take a really long time when working with large files (multiple gigabytes). This makes for a degraded user experience.
This project will reduce the time taken to see the waveforms by:
- Processing multiple audio sequences in parallel in the background
- Computing waveforms only for sequences that are visible in the user interface
Once these initial speed-ups are achieved, I’ll explore optimization opportunities lower down the audio processing stack.
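To make the two ideas concrete, here is a minimal Python sketch of the strategy: cull strips that aren’t visible, then hand the remaining ones to a pool of background workers. Everything here (the `Strip` class, `compute_waveform`, the use of Python itself) is a hypothetical illustration, not Blender’s actual sequencer API, which lives in C/C++.

```python
from concurrent.futures import ThreadPoolExecutor

class Strip:
    """Hypothetical stand-in for a sequencer audio strip."""
    def __init__(self, name, samples, visible):
        self.name = name
        self.samples = samples    # stand-in for decoded audio samples
        self.visible = visible    # is the strip in the visible timeline area?
        self.waveform = None      # filled in lazily, only when needed

def compute_waveform(strip):
    # Stand-in for the real (expensive) waveform pass; here we just
    # take per-sample absolute peaks.
    strip.waveform = [abs(s) for s in strip.samples]
    return strip.name

def compute_visible_waveforms(strips, max_workers=4):
    # 1) Visibility culling: skip strips outside the visible view.
    visible = [s for s in strips if s.visible]
    # 2) Parallelism: each visible strip is processed by a worker thread,
    #    so the UI thread never blocks on waveform computation.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(compute_waveform, visible))

strips = [
    Strip("voiceover", [0.5, -0.8, 0.1], visible=True),
    Strip("music", [0.2, 0.9], visible=False),
    Strip("sfx", [-0.3, 0.7], visible=True),
]
finished = compute_visible_waveforms(strips)  # only "voiceover" and "sfx"
```

The real implementation would plug into Blender’s existing task scheduler rather than a Python thread pool, but the shape of the work (filter, then fan out) is the same.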
Benefits
By speeding up waveform computation, Blender will spend less time and fewer computational resources generating the waveform data in the first place.
Creators will get more immediate feedback whenever they add new sequences to their project, making for a more polished user experience.
Deliverables
- Benchmark reports on Blender’s waveform computation
- Waveform computation performed only for visible strips
- Waveforms computed in parallel
- An experiment report on changes to AUD_readSound
Once GSoC starts, I’ll post weekly updates in the comments.