Some early news:
I looked into the preprocess cache and found that it is unfinished. This means it's not working. Here's a snippet:
void BKE_sequencer_preprocessed_cache_put(const SeqRenderData *context, Sequence *seq, float cfra, eSeqStripElemIBuf type, ImBuf *ibuf)
{
	...
	/* Any frame number other than the last cached one wipes the whole cache. */
	if (preprocess_cache->cfra != cfra)
		BKE_sequencer_preprocessed_cache_cleanup();
	...
This causes every new frame to erase the cache, so at least no resources are wasted.
To be fair, I didn't realize how much memory you need just for one frame. Now I am curious whether other similar software uses some kind of compression algorithm.
I added frame cost to the seq cache: cost = time_spent_rendering_frame / (1 / set_FPS), i.e. render time divided by the per-frame budget. So if cost < 1, the frame renders fast enough for real-time playback.
I implemented a cache viewer similar to what's used in the movie clip editor. Frame cost is shown on a color scale: blue is best, red is worst. This reveals some interesting patterns…
After this I tested a simplistic prefetcher. It works; I will have to implement freeing of "used" cache frames to be able to prefetch indefinitely.
Here is a clip of playback with the cache view (cache for strips is disabled).
The strong blue strips are interesting: they are always in the same spots and evenly distributed. The red ones presumably indicate the overhead of opening a new file stream.
My plan is to finish this prefetcher with the following strategy: prefetch n future frames; if needed, free the "used" cached frames with the lowest cost. This is a good strategy for playing back long parts of the timeline.
Then I will look at how movie files are loaded and try to optimize this a bit. The general idea is that if a file plays smoothly outside of Blender, it should play smoothly in Blender. Without proxies, of course.
After that we can implement prefetchers with more strategies:
- A far-lookahead prefetcher for parts of the timeline that render so slowly that the "play" prefetcher could not render frames fast enough.
- An editing prefetcher, triggered by the user making edits: remember the edits, keep the sources in cache, and possibly start prefetching the result.
These changes would reduce the main thread to mainly a cache viewer, with prefetch threads maintaining the cache according to the chosen strategy.
This process has to be automated, and I think we will have to recognize workflow patterns.
Resources are scarce: with 16 GB of cache you can store only about a minute of full HD 60 fps 8-bit footage.
Speaking of cache viewers (which, in the context of proxies, the sequencer already is), the user should be able (and encouraged) to make a proxy of a time-consuming part once they are satisfied with the edits. Then this one minute of cache should be enough for a lot of tasks.
In my case I have only 1 GB of cache (4 GB RAM, don't laugh), and with proxies on effects I can at least preview the timeline in real time, so even low-end users can be satisfied.