Geometry Nodes Caching Prototype

As part of my internship at Unity, I was tasked with building a caching prototype for Geometry Nodes. The goal was to create a prototype and share it with you all to help kickstart the conversation around the various design decisions that will be needed for a proper proposal to be made. While working on this prototype I ran into various edge cases that I will walk through in this post.

This work is purely an experimental prototype and not a design proposal in itself.

A patch against master for this prototype can be found here.

Introduction

Caching within Geometry Nodes has been a much requested and talked about feature. Working with large and complex node graphs can get very slow when running in real time and caching would largely solve this issue by allowing computed data to be reused. The original design proposal for geometry nodes discussed implementing a cache node and there are multiple rightclickselect posts asking for the same thing (here and here). A more recent discussion under this devtalk thread built further on this idea by proposing two separate caching implementations - manual and automatic.

Manual Caching would involve the user controlling what nodes or sections of the node graph are cached.

Automatic Caching would involve the node tree evaluator storing the result of every node such that only the required nodes would be recomputed when something is changed.

This prototype implements a manual caching approach, however many of the ideas, implementation details and questions raised would likely also apply to an automatic caching system.
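To make the manual approach concrete, here is a minimal Python sketch of the idea (all names here are hypothetical, not taken from the prototype patch): a node marked for caching stores its output on the first execution and returns the stored data on every execution after that.

```python
# Hypothetical sketch of manual caching; names and structure are
# illustrative only, not from the actual prototype.
class Node:
    def __init__(self, compute, use_cache=False):
        self.compute = compute      # the node's (possibly slow) operation
        self.use_cache = use_cache  # user toggled the cache icon
        self._cached = None         # stored output, None = no cache yet

    def evaluate(self, *inputs):
        if self.use_cache and self._cached is not None:
            return self._cached     # cache hit: skip the computation
        result = self.compute(*inputs)
        if self.use_cache:
            self._cached = result   # first run after enabling: store it
        return result

calls = []
subdivide = Node(lambda mesh: calls.append(1) or mesh * 2, use_cache=True)
subdivide.evaluate(10)  # computes and stores
subdivide.evaluate(10)  # served from the cache
assert len(calls) == 1
```

The real prototype of course stores geometry data and handles invalidation; this only illustrates the hit/miss behavior.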

Demo

This is a demo video showcasing the caching prototype and example use cases. As you can see, nodes which process geometry can be marked for caching via an icon in the node header. Once marked, the outputs of the node will be cached the next time it executes. The cached output data will then be used every time this node is re-executed thereafter.

Caching is not available for math, attribute, field input nodes, etc. This is because they either don’t process geometry, and so are always extremely fast, or they aren’t actually “executed”, such as the field input nodes. Note that these nodes also don’t show timings when enabled in current Blender.

More Demo GIFs

NOTE: Many of the GIFs in this post need to be opened in a new tab to view properly.

caching_prototype_header

bottlecap

Benefits

In this prototype, a cached node usually takes between 0.1ms and 1.0ms. This depends on the size of the cached data and scales linearly with it, so it can take even longer for very large geometry. Generally, a cached version of a node will always be faster for anything more than simple geometry.

This provides a massive speed up for slow nodes such as Boolean, Subdivision, etc.

As a result, boolean geometry can be manipulated in real time like so:

slow_boolean

Nodes that are gated by a cache are not executed when the tree is re-evaluated. Additionally, updating those nodes, such as by changing a socket value or link, does not trigger a re-evaluation either.

This allows users to section off slow areas of their node graph using a cache to greatly improve performance when updating the graph and node parameters.
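The gating behavior can be sketched as a reachability check (a hypothetical, simplified sketch — the prototype works on the real node tree, not a dict): an edit only needs to trigger a re-evaluation if some path from the edited node to the output is not blocked by a cache.

```python
# Illustrative sketch with hypothetical names: an update only triggers a
# tree re-evaluation if some path from the edited node to the output is
# not blocked by a cache.
def reaches_output(node, outputs, downstream, cached):
    if node in cached:           # a cache gates this path
        return False
    if node in outputs:
        return True
    return any(reaches_output(n, outputs, downstream, cached)
               for n in downstream.get(node, ()))

# bottle-cap example: A feeds cached node B, T (transform) is not gated
downstream = {"A": ["B"], "B": ["out"], "T": ["out"]}
assert not reaches_output("A", {"out"}, downstream, cached={"B"})  # no re-eval
assert reaches_output("T", {"out"}, downstream, cached={"B"})      # re-eval
```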

bottlecap_optimization

In this GIF you can see the nodes that create the bottle cap being gated by a cache from the rest of the node tree. This allows me to update them without causing the tree to re-evaluate. However, when I update the transform node, which is not behind the cache, the graph executes as it is updated, allowing us to see the changes in real time.

This benefit is only possible with manual caching. It ensures that no unnecessary updates occur that won’t change what the user sees in the viewport and allows them to easily make necessary changes without disconnecting links in the tree.

Also note how the bottle cap nodes blocked by the cache don’t execute when I update the transform, as it would be redundant. This benefit would also be seen with automatic caching.

Design

This section will discuss the various design decisions I made and why, as well as their flaws. I would be very interested to hear any feedback and/or ideas you may have around this.

Why not have a dedicated cache node?

As mentioned above, previous ideas around caching discussed implementing a dedicated cache node. This prototype avoids the need for a dedicated cache node by giving every geometry node an implicit adjacent cache.

I implemented it this way after chatting with internal Unity artists who have significant experience in the VFX industry. They expressed that a dedicated cache node has been tried in the past in other tools and is a bad approach when working with large node graphs. It often results in users littering cache nodes throughout their graphs.

The main benefit of a dedicated cache node is that it could have a more detailed UI for more complex caching (such as a filepath for Alembic/USD). However, this could potentially be avoided if the generic node UI was extended.

Memory Cache

This prototype implements a “Memory Cache” in that it only ever stores the cached data in memory at runtime and never onto disk (either in the blend file or otherwise). This means that if a user saves their blend file with a cached node and then re-opens it, it will recompute the cache as the node graph is evaluated. The cache could be written to the blend file, but it arguably works fine either way in this prototype.

Another option is Alembic or USD caching, which would require specifying a file path and more parameters. These methods of caching likely wouldn’t offer performance improvements, as they require writing to/reading from disk, and would be more useful for interoperability between applications. Different types of caching are discussed in this Blender Wiki article.

UI

By far the most important area to discuss with regards to a caching design would be the UI. This prototype’s UI breaks many of the geometry nodes UI conventions and is certainly not a good design. The reason for this is discussed here and in the next section.

Header Icons/Buttons

This prototype adds caching related icons to the header of each geometry node, similar to the nodegroup icon. When a node is not cached, a cube icon can be pressed to enable caching.

disabled_cache_icon

The cube icon is just a random one I chose as a placeholder. A specific cache icon would make sense to have.

Once cached, the cube becomes an X to disable caching and a refresh icon for recomputing the cache appears beside it.

enabled_cache_icons

This is the one area of the UI that I feel works well (aside from the icon choice); however, other options could include placing the caching buttons/icons in the body of the node or on some kind of side panel.

Header Coloring

When cached, a node’s header changes color to green or red depending on whether the cache is clean or dirty.

enabled_cache_icons

dirty_header

This breaks the convention that node header color is used to indicate type. As discussed in the implementation issues section, I was limited by the current node UI code in this regard.

An alternative idea might be to do something more in line with muted nodes by indicating the node as “frozen” or “cached” using a colored outline and applying a tint/alpha to the node body.

Socket & Link Coloring

To give the user a visual understanding of the state of caching in the graph, I also opted to color sockets and links with 3 states:

  • Clean input sockets and links are colored green
  • Dirty input sockets and links are colored red
  • Cached output sockets and links are colored blue

The state of cached links is propagated throughout the graph every time something changes upstream from a cached node. This makes it easy for users to see what data has changed since the node was cached.

cache_status_propagation

In the case where a given link or socket is associated with multiple downstream nodes, the dirty (red) status will always take priority over the clean (green) status.

two_downstream_caches

The main issue with this design is that it once again breaks UI conventions. Link and socket color are typically used to indicate type, and red is already used for links between incompatible sockets and for muted links.

One alternative idea might be to change a link’s style instead of color, similar to how it works for muted and field connections:

This is not particularly clear in larger graphs though and it doesn’t solve the issue for socket colors.

Another idea might be to outline the links and sockets or potentially tint their existing color.

Problematic Areas / Implementation Issues

UI

As discussed in the section above, the UI design in this prototype is very rough. To achieve any of the alternative ideas mentioned, a more complete design proposal would need the ability to express more information within node trees than is currently possible. This would likely require larger / more drastic changes to the general node drawing and UI code, as well as potentially significant changes to the current style of Blender node graphs. See node_draw.cc.

Colors

The only components of a node that can have their color changed programmatically are the header, sockets and links. The body color is set by the user, and the text and other UI are colored based on Blender theme settings for icons, text, etc.

To avoid making any large changes, I chose to simply color the header, sockets and links in the prototype.

Implementation

In this prototype, cache status is stored on the bNodeSocket DNA structure. Specifically, the variables is_cached_count and is_dirty.

is_cached_count tracks the number of paths from that socket to downstream cached nodes and is_dirty is simply a flag saying whether the socket is dirty or not.

These variables are only ever set on input sockets because output sockets can have multiple links connected to one socket. The reason they are not stored on links is that we need to be able to record when sockets with no link are dirty. This can happen if the user changes the socket value using the UI of the node.
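As a standalone illustration of the described semantics (not code from the patch, which tracks this incrementally on bNodeSocket), is_cached_count for a point in the graph can be thought of as the number of paths from there to downstream cached nodes:

```python
# Illustrative recomputation of the described is_cached_count semantics:
# the number of paths from a node to downstream cached nodes.  The
# prototype updates this incrementally; this recomputes it for clarity.
def cached_path_count(node, downstream, cached):
    if node in cached:
        return 1                     # a path ends at a cached node
    return sum(cached_path_count(n, downstream, cached)
               for n in downstream.get(node, ()))

# A feeds two branches, each ending in its own cached node:
downstream = {"A": ["B", "C"], "B": ["D"], "C": ["E"]}
assert cached_path_count("A", downstream, cached={"D", "E"}) == 2
```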

However, multi-input sockets can have multiple links to one input socket and do not work correctly in this prototype.

Node Groups

Node group nodes cannot be cached in this prototype. (This is different from caching nodes within node groups, which does work and is discussed further down.)

If this were possible, it would provide much more control for users by enabling them to cache arbitrary sections of their geometry nodes graphs.

The reason this does not work is primarily an issue with the current Geometry Nodes Evaluator. The evaluator flattens the entire node tree before evaluation and this makes it difficult to determine where specific node group input and output data/sockets are in the tree for caching. It may have been possible to make this work but it would have been fairly messy and would benefit from some modifications to the evaluator. The Geometry Nodes Evaluator 3.0 looks like it will solve this as it mentions removing the need of inlining all nodegroups.

Storing the Cached Data

This is by far the biggest issue with implementing caching.

You would expect that when a node is cached, the cache can be stored on the bNode or bNodeSocket DNA structure (i.e. on the node). However, the issue is that a given instance of a bNode is non-unique (and by extension bNodeSockets have the same problem).

For example, if we create a tree with two duplicate node groups, it will now contain two instances of the same node:

duplicate_bnode1

The same situation can be found when you give two separate objects the same geometry node tree:

duplicate_bnode2

In both of these cases there are now two instances of the same transform bNode.

If we were to try caching the transform node, which geometry data would it store?

On the modifier

To solve this issue, I chose to store the cached data on the geometry nodes modifier instance. This is similar to how the logger works (see NOD_geometry_nodes_eval_log.hh) except with some differences.

The modifier stores one big cache structure which is essentially a mapping from sockets to caches. bNodes and bNodeSockets are only unique within a given tree level, so to uniquely identify sockets in the current modifier’s node tree hierarchy, the modifier stores its own tree hierarchy consisting of TreeCache, NodeCache and SocketCache structures (see NOD_geometry_nodes_cache.hh).

With this working, different cache data can then be stored for each instance of a geometry nodes modifier. This works well in a lot of cases. However, it also leads to some very unintuitive and broken situations, explained in the next subsections.
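The hierarchy might be pictured with a minimal Python mirror of the idea (the real structures are C++ in NOD_geometry_nodes_cache.hh; the field names and lookup method here are illustrative only):

```python
# Illustrative mirror of the modifier-side cache hierarchy described above.
class SocketCache:
    def __init__(self):
        self.data = None

class NodeCache:
    def __init__(self):
        self.sockets = {}   # socket identifier -> SocketCache

class TreeCache:
    def __init__(self):
        self.nodes = {}     # node name -> NodeCache
        self.subtrees = {}  # group-node name -> TreeCache (nested groups)

    def socket_cache(self, node_name, socket_id):
        node = self.nodes.setdefault(node_name, NodeCache())
        return node.sockets.setdefault(socket_id, SocketCache())

# Two modifiers sharing one node tree each own a separate TreeCache,
# so the same bNode can hold different cached data per modifier:
cube_modifier, suzanne_modifier = TreeCache(), TreeCache()
cube_modifier.socket_cache("Transform", "Geometry").data = "cube mesh"
suzanne_modifier.socket_cache("Transform", "Geometry").data = "monkey mesh"
assert cube_modifier.socket_cache("Transform", "Geometry").data == "cube mesh"
```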

Multiple Objects with the same Geometry Nodes Tree

With the cache being stored on the modifier, this scenario does work, but it is not very intuitive and the cache status UI does not work correctly. The reason is discussed under the “Disconnect between UI and cached data” section.

different_objects_same_tree

As you can see, enabling caching enables it for all instances of the node tree. Therefore, when the same node tree is added to Suzanne, the data is only cached at that point. However, refreshing the cache will only refresh a single cache. There are also many issues to do with invalidating cached data in these situations which are not solved in the prototype. For example, if you disable caching on a node in the tree, it will only clear the cache on the currently selected modifier.

Nodes within Node Groups

Caching nodes within node groups works, but it is also not intuitive.

cached_nodegroup1

Here you can see I enable caching on the “Mesh to Points” node within the node group. Remember that the evaluator flattens the entire tree before evaluation. Therefore, the “Mesh to Points” node is the last node before the output, and so when I change either of the transform nodes nothing happens. This case works as you would expect; the main issue is that the colored links and sockets UI is missing.

However, what if you try to duplicate a node group which has caching enabled for a node?:

duplicate_nodegroups

In this GIF I create a node group out of a transform node and then click to enable caching on the transform node within the nodegroup. I then duplicate the nodegroup and place it downstream in the graph.

In this case, the cache for the duplicate instance of the node group is only stored when it gets dropped into the tree. Then both caches are cleared when caching is disabled for the transform node. This is not intuitive at all.

The reason that the colored links and sockets UI is missing for node groups is that bNodeSockets are non-unique across node groups and the UI status is stored on the bNodeSockets.

Disconnect between UI and Cached Data

The issues in both of the above subsections stem from two places.

Firstly, the ability to use a single node tree data block in multiple places.

Secondly, the fact that the data for storing and controlling cache state for the UI is stored in DNA structures like bNode (see the use_cache flag), bNodeSocket, etc., which are disconnected from the actual state of the cache stored on the modifier.

This leads to the clean vs dirty status of the tree being incorrect in situations like this:

different_objects_same_tree

Potential Solution

A potential solution to this issue of a “disconnect” would be to store the UI status on the modifier as well. From reading the spreadsheet code, it seems possible to retrieve the data from the currently selected modifier and draw a different UI depending on it.

This would ensure that the UI is always directly synced with the current state of the cache. It would also allow for separate UI for duplicate instances of the same nodes/trees.

Capture Attribute Node

The capture attribute node does not work correctly with caching in this prototype.

capture_attribute

As you can see in this GIF, when the “Curve to Mesh” node is cached the “Delete Geometry” node stops working correctly. This is because the selection field evaluation becomes incorrect as it relies on an attribute captured from the curve circle.

As a user you might expect this to work fine, because you may think that “Capture Attribute” does not affect the geometry passed through it. This had me confused for quite a while.

However, it turns out that capture attribute actually stores a new attribute on the geometry passed through it and then it passes a reference to this attribute out on the “Attribute” output. The cached geometry does have the new attribute, however when the tree is re-evaluated (as I change the math node) the capture attribute creates a new attribute reference and outputs it. As a result, the reference does not match up with the cached geometry’s attribute and it fails to evaluate the field correctly.
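A toy simulation of this failure mode (hypothetical names, heavily simplified — real anonymous attributes are far more involved than string keys on a dict):

```python
# Simplified model of the described bug.  Each run of Capture Attribute
# mints a fresh anonymous attribute reference; a cached geometry still
# carries the attribute from the *cached* run, so a lookup using the new
# reference fails.
import itertools
_ids = itertools.count()

def capture_attribute(geometry, values):
    ref = f"anon_{next(_ids)}"        # new reference every evaluation
    geometry[ref] = values            # attribute stored on the geometry
    return geometry, ref

geometry, ref1 = capture_attribute({}, [1, 2, 3])
cached_geometry = dict(geometry)      # the node output gets cached here

# Tree is re-evaluated upstream; Capture Attribute runs again and outputs
# a new reference, but the cached geometry downstream is reused as-is:
_, ref2 = capture_attribute({}, [1, 2, 3])
assert ref1 in cached_geometry
assert ref2 not in cached_geometry    # field evaluation fails downstream
```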

I didn’t spend enough time digging into this to see if it could be fixed but it is definitely a major bug.

Future

Apologies for such a massive post, a big thank you if you made it this far!

I am finishing my internship at Unity this week; however, I plan to engage in discussion under this post and may be interested in contributing personally in the future.

That being said, a dedicated team at Unity is investigating potential work around this topic and is interested in potentially aligning on a future design proposal.

I look forward to hearing everyone’s thoughts on caching in geometry nodes and the various designs/issues discussed here! If you have any alternative ideas please share them.


A very big proposal. The main problem with this approach is that the user is explicitly involved in managing the cache. Extra information (most of the new UI) should not be needed. All caching should be completely hidden from the user and run on the same update system as the current node system.

Also, using sockets as a way to store this information seems like a reasonable experiment. But since this is only needed for geometry nodes, I think it should be allocated somewhere else, although I’m not sure where.

I’m not entirely sure how fields and attributes are cached (on their own or saved to the geometry)


Oh wow, this is very impressive work :grinning:
thanks for your contribution!
a cache system is indeed very needed

Ultimately, caching only needs to happen at certain save points;
having a cache option on absolutely every geometry node seems like overkill, doesn’t it?
If the user needs to cache, they can add a new cache node at the chosen checkpoint and that would be it

and is a bad approach when working with large node graphs.

“it’s bad” is not an argument
why is it bad from his point of view?

It looks quite the opposite IMO; users would need to scan all of their geonode headers in order to find the caching point(s), while with a cache node you simply have to search for cache node(s)


Great to see some work on this!

While I like the idea of being able to enable caching on each node without a separate cache node - I don’t think caching should require manual refreshes by default.

Ideally, updating a cached node’s dependencies should just update the cache, rather than having no immediate effect. This also eliminates the need for special UI to indicate out of date caches etc.
An option or hotkey to temporarily pause node tree updates may be useful but isn’t required here initially IMO

Additionally, I think an automatic implicit cache based on node timings may be more intuitive on top of manually enabling and disabling caching in node headers. Keeping track of the top N most time-consuming nodes in a tree and caching those automatically would make sense I think.
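A sketch of that heuristic, under the assumption that only nodes slower than a typical cache read (roughly 0.1–1.0ms per the prototype numbers) are worth caching; all names here are hypothetical:

```python
# Hypothetical sketch of the suggested heuristic: automatically enable
# caching on the N most time-consuming nodes of the last evaluation.
import heapq

def pick_auto_cached(timings_ms, n=2, threshold_ms=1.0):
    # Only nodes slower than the typical cost of a cache read are worth it.
    candidates = {k: v for k, v in timings_ms.items() if v > threshold_ms}
    return set(heapq.nlargest(n, candidates, key=candidates.get))

timings = {"Boolean": 120.0, "Subdivide": 45.0, "Transform": 0.3, "Math": 0.01}
assert pick_auto_cached(timings) == {"Boolean", "Subdivide"}
```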


Thanks for pushing this forward!

Even though I am by no means experienced in the topic, there are some aspects that you didn’t include in your post.

While designing the geo node caching system, I think it is relevant to consider two additional future projects.

The first project is Shader Caching, which has been discussed in detail in the Texture Nodes discussion.
The designs of those systems should resemble each other (as much as possible) IMO.

The second project is the Edit Mode node, which AFAIK means that there would be a node that caches the current geo node state and additionally allows editing/sculpting it as a regular mesh. (Maybe even editing values in the spreadsheet?)

Another thing to consider is the difference between saving to disk and caching. Although similar, those systems have different use cases, and they may be linked together or not. Of course, this kind of thing may change between caching systems. For example, the requirement and use-cases for caching a texture (rasterization) or saving it to disk (baking) may differ from their parallel options in geometry.


Other software has tried explicit cache nodes in the past and you just end up with hundreds of them in the tree. Better it just be an option on a node so you can cache right there. If you want a “cache node” then just use a join geometry node and cache it and change the label so it’s searchable.

I believe checkpoints have been discussed before but I am definitely more excited for a per node caching option.


Any discussion of explicit caching is meaningless. All nodes can be implicitly cached. You will waste a lot of time with any caching if it is explicit. If these are nodes, then you will end up with hundreds of duplicate nodes, which is also not very pleasant. Implicit caching belongs in the computation tree. The user should not know about it at all until the increase in speed becomes visible.


And they stick with it, perhaps because it works well? :thinking:

Not sure how this is an argument, as it also applies to the header implementation;
you might not need to “add a new cache node” but you’ll still need to manage hundreds of caches both ways. Are you sure that the users who always deal with hundreds of caches are representative of most users? :thinking: I rarely see such behavior in Houdini, for example

A header interface might also get significantly more complex when we need more advanced caching functionality in the future, such as saving caches to disk/blends, managing multiple caches per node, or loading caches, etc.


Where would you store your automatic caching?
And what about managing memory size/ blend file size?

Not sure how you could save a 10 GB sim implicitly


Indeed, you opened my eyes; I hadn’t realized how expensive full caching looks. I would give my choice to an explicit node. But the situation where a bunch of nodes can duplicate cache ownership, because node group authors added caches in an uncoordinated way, also hurts.

Thinking

The amount of memory needed is currently equal to the size of the result.
During calculation, memory size = the maximum memory size of a single node.
With parallelized calculation, which is planned, this would be the memory of one node per thread.

Disk storage is clearly not what it should be; a cache would have to depend on the Blender version and on node implementation/availability.

The same applies to shaders; caching the GLSL runtime looks strange


I might agree with this. I will have to mull it over for a while yet though.

This slightly reminds me of when geo nodes were first coming out and Pablo was thinking of having a design more similar to Houdini’s implementation, where on each node you would have a viewer flag to show the output in the viewport. Inevitably the devs chose to go with an approach more similar to Sverchok’s, with a view node/output node sort of idea. They both work just fine, but have different ups and downs. With view flags on each node it complicates the UI more, but it’s convenient, while the approach we use now allows for more options to be added to the output node/viewer node (though those haven’t been added yet).

I think for this conversation about caching I’d rather have the Sverchok-like implementation, aka a dedicated cache node, because I can honestly see having more options as a benefit. For instance, with the current design could I store the cache outside of the blend file? It seems not, perhaps, though I’m not sure.

I feel alembic/USD caching and the likes are always needed for cross software needs. I would love to make a change in my geo nodes tree and have it cache to an alembic cache node and have that automatically picked up in another software with its own read alembic node for example.

I suppose I could see it being nice to have a built-in system like this for its sheer convenience, as it’s only a click away, but I feel it should be in conjunction with a dedicated node for cross-software needs, or when more specific caching options would be needed. So that would make it almost entirely for performance reasons when you don’t need to write out something to a file, simulations etc.

As well, I think currently the design creates too many UI complications that break conventions, and, if I can have a friendly laugh, it seems a bit ugly color-wise, but that can be fixed no doubt.

Edit: I’m borrowing a French keyboard atm and it’s done unspeakable things to the format of this post, sorry about that, no idea how to fix it.


really nice idea / implementation :smiley:


Nice to see more geo node experiments that go toward simulation and caching. (video by @LukasTonne)

I think you guys are not distinguishing between freezing and caching; what the author made is more of a freezing than a caching IMO, simply because it’s still stored in the blend file, while for caching you always need the option to do it on disk, the option of the format, the option of the time frames, etc.


While freezing can just be a button on a node, caching needs a lot more options


Whoah, that’s some crazy stuff.

So anyway here’s a thought i’m having about the UI part

My idea on decluttering the UI a bit would be this:

A similar approach to how the node timings are displayed.

Under normal use circumstances nothing would be shown, but once you either hover over a node or click it, a small button would show up, the same way the timings are displayed.
(The display of the window can be controlled as an extra button in the node display options, plus a settings menu option for whether it’s hover-over or click-on-window to display.)


First window with its overlay displayed (hover or click to select the window), second window when a cache is present on the node but it is not selected or hovered over, third window showing how the overlay is controlled

Doing it this way could also allow you to control the “error colour” without changing the colour of the main window itself, by changing the colour of the overlay window or something along those lines.

And honestly I like the idea that caching is part of the UI itself, rather than a cache node; I really don’t need more nodes than I already have.
The idea should be to lessen the node clutter and not to add more to it. :slight_smile:


Great demonstration, I like the polish and thought put into the UI. Not much to add to the UI/UX discussion, but a few thoughts on related topics:

  • Memory requirements will probably become an important aspect soon. So far there is only a “cache” for each object’s final geometry (Object.runtime.geometry_set_eval), so overall number of caches remains manageable. With node caching this could balloon quickly, so some tools to manage caches could be useful. List all cached nodes, sort by usefulness (amount of computation time saved), etc. Could be done mostly as a python addon with a little bit of RNA support.
  • That’s also one reason why automatic caching is difficult. Could be tested first as part of the same python addon for caching utils. Perhaps an overall memory budget to avoid nasty surprises.
  • How well does depsgraph invalidation of input data work? I suspect that using two object inputs will invalidate all the caches every time one of them is updated, because there is no connection between e.g. Object Info nodes and the depsgraph to tell which input data actually changed.
  • "Timeline" caches (simulation cache) are a separate feature. It’s not just useful for simulation but also playback performance. For timeline caching a dedicated node is acceptable IMO since there will rarely be more than one point of loading/storing cache data. Memory requirements even more of an issue, needs disk storage and selective caching (e.g. store topology once plus deform layer per frame)

Ideally, updating a cached node’s dependencies should just update the cache, rather than having no immediate effect. This also eliminates the need for special UI to indicate out of date caches etc.
An option or hotkey to temporarily pause node tree updates may be useful but isn’t required here initially IMO

I may be misunderstanding exactly what is meant by this, but one of the reasons we wanted to add the manual refresh is for cases when you have a large tree that takes a lot of time to process.

If a user wants to update a bunch of nodes upstream of a cached node that takes a lot of time to process, each time the user updates those nodes they have to wait for the cached node to reprocess. So if a tree takes 60 seconds to evaluate, I update node 1 then wait 60s, I update node 2 then wait 60s, etc etc. So it’s a really time consuming process and completely interrupts the user’s workflow.

If I can manually cache and refresh a node I tell the node to cache its results, make my upstream changes, and then refresh the cache to evaluate the tree and only wait 60s once.

But I may be misunderstanding the expectation on how that would work.

I think you guys are not distinguishing between freezing and caching; what the author made is more of a freezing than a caching IMO, simply because it’s still stored in the blend file, while for caching you always need the option to do it on disk, the option of the format, the option of the time frames, etc.


That is an interesting point that we hadn’t considered. I guess for the internal users we were working and iterating on this with, that was never raised as something necessary. They were more than happy to just be able to “freeze” slow sections of the tree, and it massively improved their productivity when they were finished with a slow section but needed to do further work downstream. Then, when they closed and re-opened the project, they wouldn’t care about re-running the entire tree once to re-“freeze” everything, because it still saved them so much time.


Hi Unity team,
thanks a lot for this contribution

Coming from Houdini, the caching system as presented above is unfortunately not production-proof, because a caching system needs to be flexible & universal: it needs to be able to work on small selections, like demonstrated above with the Erindale project, but also on larger VFX where many millions of elements need to be baked, especially considering that the geometry nodes team wants to add simulation nodes in the future.

A cache can hold animation & GBs of data! Having the option to save some kind of cache files on disk is very important, and it seems it was entirely omitted from this proposal :sweat_smile:. Users need to be able to see the memory used by the cache, and manage it!

Also, did you know that Blender already has a cache datablock that supports animation & can be saved on disk? Perhaps this part of Blender can be improved & adapted to geometry nodes?

I believe the perfect cache node would by default automatically create & write a new cache datablock; users could switch datablocks if needed, or save the block on disk (as .abc perhaps?)

Cheers


I agree, this probably won’t work with production scenes with tons of geometry and other manipulations.

We need a dedicated Cache node where we can define what kind of cache we want. We may want a freeze-type cache, to pre-compute part of the tree on the fly; it can be refreshed when something changes, which will automatically disable the cache and allow you to see the outcome of the modifications in real time, like it works right now, with no need to trigger the cache again. Or maybe we want a disk cache to pre-compute that part of the node tree to disk, so it goes on from that point, and the previous nodes cannot be modified if you want the cache to keep working, or it will be invalidated; similar to the previous version from a user perspective, but it loads the information from files.

We may also want to cache out to disk, or to freeze just part of the node tree in time, for example just frames 50 to 140, because they have some complex operation that takes ages, while keeping the rest of the tree live.

Also, you may need to decide what type of cache it is: ABC, USD, VDB or some other type?

Also, that cache node could and should become a data source, so it can load an Alembic file from the ground up without any previous node tree and use it as a starting point.

In the same way, a cache system should be ready for time modifications, like accessing information at a different moment in time in a way that is transparent to the user (this would require additional nodes, of course)

Having a cache system implemented inside every node could seem cool, but it will get out of control pretty soon, and it won’t work well with big and complex node trees. On the other hand, a cache node can implement several solutions in one, and it will be crystal clear where it is located, what is cached and what is not, etc.


Also, I really dislike the idea of having to access it with a mouse gesture; something counter-intuitive and not present in any other part of Blender should not be the only solution.

Thanks for the work, of course, and this is useful because no doubt it will help outlining how the cache system should work, I don’t really like this first approach, but it’s a first approach :slight_smile:


It’s a good idea, but I’m not a fan of the icon in the corner. I think the system needs a cache node like your system, but at the bottom of the node like this:


Fantastic points. I was imagining this caching scheme expanding to generic trees (shaders, compositing, etc.); it has an interesting overlap with baking textures as well


If it’s a separate node, the graph node counts will ~double as users start inserting them everywhere. I’ve been involved in productions that did this and it caused a spaghetti explosion of chaos.

In my experience, I would definitely not do it with a separate cache node; it should be a core part of each node to keep graph sizes small. Here be dragons.

If per-node caching is too onerous, an alternative idea would be to make the caching behavior part of node groups.

5 Likes