We’ve mainly thought about it as a way to make tweaking values in the node group faster when nothing else is changing.
Yes, there seem to be different expectations of where caching would be employed:
- Inside node trees to retain the output of individual nodes. Avoids downstream node evaluation.
- For the entire object’s geometry output. Avoids dependent object updates.
How do we decide which nodes or objects to cache persistently without racking up too much memory? The easiest solution would be to give users control, so they can decide which nodes are expensive enough to warrant caching (some nodes like Boolean could have caching enabled by default).
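A user-controlled per-node cache could look roughly like the following sketch. All names here (`Node`, `cache_enabled`, `NodeCache`) are hypothetical illustrations, not actual Blender API; the point is just that caching is opt-in per node and keyed by the node's inputs:

```python
from dataclasses import dataclass


@dataclass
class Node:
    # Hypothetical node type; `cache_enabled` stands in for the
    # proposed per-node user toggle.
    name: str
    cache_enabled: bool = False
    compute_count: int = 0  # counts real evaluations, for illustration

    def compute(self, inputs):
        # Stand-in for an expensive operation such as a boolean.
        self.compute_count += 1
        return sum(inputs)


class NodeCache:
    """Stores node outputs keyed by (node, inputs), only for opted-in nodes."""

    def __init__(self):
        self._store = {}

    def evaluate(self, node, inputs):
        if not node.cache_enabled:
            # Node did not opt in: always recompute, no memory cost.
            return node.compute(inputs)
        key = (node.name, inputs)
        if key not in self._store:
            self._store[key] = node.compute(inputs)
        return self._store[key]


cache = NodeCache()
boolean = Node("boolean", cache_enabled=True)
cache.evaluate(boolean, (1, 2, 3))
cache.evaluate(boolean, (1, 2, 3))  # second call is served from the cache
```

Memory stays bounded by whichever nodes the user flags, which sidesteps the question of a global eviction heuristic.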
The problem of detecting cache invalidation does not exist at the per-node level, because nodes are associated directly with the data they output. The depsgraph, on the other hand, only has a very informal mapping to object data through its component types. Using the depsgraph to invalidate a cache could therefore produce quite a few false positives and negatives:
- false positive: the cache is invalidated even though the actual data is not affected, leading to unnecessary recalculation.
- false negative: the cache is not invalidated because the input data is not considered part of the tagged components, so it gets “stuck” and users are left wondering why things don’t update.
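The two failure modes can be sketched as follows. The function names and the coarse "component" granularity are simplified assumptions for illustration, not the actual depsgraph API; the contrast is between invalidating on whole tagged components versus comparing exactly the data a node consumed:

```python
def should_invalidate_component(tagged_components, node_component):
    # Depsgraph-style check: any tag on the component invalidates the
    # cache, even for changes the node's inputs do not depend on.
    return node_component in tagged_components


def should_invalidate_exact(changed_data, node_inputs):
    # Per-node check: invalidate only if data the node actually read changed.
    return any(d in node_inputs for d in changed_data)


# Suppose the node reads only "position" from the GEOMETRY component.
node_inputs = {"position"}

# False positive: an unrelated attribute changed, the whole GEOMETRY
# component was tagged, and the cache is thrown away anyway.
assert should_invalidate_component({"GEOMETRY"}, "GEOMETRY") is True
assert should_invalidate_exact({"material_index"}, node_inputs) is False

# False negative: "position" changed, but the change was not mapped to
# any tagged component, so the stale cache survives.
assert should_invalidate_component(set(), "GEOMETRY") is False
assert should_invalidate_exact({"position"}, node_inputs) is True
```

The exact-data check is only possible where the cache sits next to the node that produced the data, which is why the problem disappears at the per-node level.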
The Object::runtime data is essentially already a cache for arbitrary data, but it is only used for “caching” between depsgraph operations within the same frame and is not persistent. The Point Cache has persistent storage, but it is currently not general-purpose enough to support all types of geometry. The Point Cache is also only available in certain contexts (physics simulations) rather than as a general caching mechanism, let alone one that provides a cache per node.