2022-10-31 Geometry Nodes Post BCON22 Workshop

Post BCON22 Workshop

This workshop took place from October 31 to November 3, 2022 at the Blender headquarters in Amsterdam.


Participants

  • Dalai Felinto
  • Hans Goudey
  • Jacques Lucke
  • Simon Thommes


Topics

  • Spreadsheet Editing
  • Import Nodes
  • Procedural Hair Workflows
  • Attribute Editing
  • Dynamic Socket Type
  • Dynamic Socket Count
  • Simulation
  • Automatic Caching
  • Freeze Caching
  • Geometry Object
  • Usability
  • Node UI
  • Math Nodes
  • Preview Node
  • Menu Switch / Enum
  • Comments
  • Loops

Spreadsheet Editing

  • Selection
    • Only in modes that visualize the selection in the viewport (edit mode, weight paint, etc.)
    • Sync selection with viewport
    • Selection is displayed as a color
  • Active
    • Display the active element with the active color
  • Data
    • Editable in all modes
    • It’s easier to tell what “original” means if you can always edit it
  • Display
    • In original mode, display derived data grayed out (normals, more in the future)

Import Nodes

  • String socket file path subtype
  • Alembic
    • File string input
    • (Optional) object path
    • Frame input
    • Geometry output
      • New instance type (Alembic instance)
    • Validate meshes
  • OBJ
    • Validate meshes input
  • STL
  • USD
    • More options
  • CSV
    • Creates a point cloud
    • Also creates an instance because…
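
As a rough sketch of the CSV idea above (hypothetical, plain Python rather than the actual importer design): each row becomes a point and each column becomes a named attribute, with numeric columns stored as floats.

```python
import csv
import io

def csv_to_points(text):
    """Toy model of a CSV import: rows become points, columns become attributes."""
    reader = csv.DictReader(io.StringIO(text))
    points = {name: [] for name in reader.fieldnames}
    for row in reader:
        for name, value in row.items():
            try:
                points[name].append(float(value))  # numeric column
            except ValueError:
                points[name].append(value)  # keep non-numeric data as strings
    return points

data = "x,y,z,label\n0,0,0,origin\n1,2,3,tip\n"
cloud = csv_to_points(data)
# cloud["x"] == [0.0, 1.0], cloud["label"] == ["origin", "tip"]
```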

Procedural Hair Workflows

  • Two basic questions
    • What new building blocks are needed to build the high level groups?
    • What high-level groups do we use to replace the building blocks?
  • Guide curve workflow
    • Like children in the particle system
    • Technically possible in the current system but very limited
      • Interpolating positions is possible, but affecting all attributes is impossible currently.
    • The density brush is similar to the children creation
  • Sample curve node (already finished)
  • “Ignore self” in proximity sampling
    • Needed generally but also sounds useful here
    • Something like “Use 2nd closest point”
  • UVs and surface object should be more easily accessible
  • High level groups
    • Clump
    • Scatter/Children
    • Frizz
    • Curl
  • Group index idea is important for interpolation
    • Parting and clumping should be index/id based
  • Distribution on the undeformed source mesh
    • Accessing the original geometry may be important
    • The rest position attribute gives more control
  • Creation of curves inside a “tube” mesh
    • The algorithm isn’t clear yet
    • The topology would be a tube but the deformation could be arbitrary
    • Maybe the ring index could be an attribute to make the topology clearer

Attribute Editing

  • Mesh, curves, and point clouds
  • Edit mode
    • Use case is as an input for procedural generators
    • Miscellaneous features like randomize attributes, etc.
      • Operators act on the active attribute
    • All attributes are displayed for the active element in the viewport sidebar “Attributes” panel
    • For multiple selection
      • The buttons/sliders “offset” the mean value
      • Operator to set all selected elements to a value (prefilled with active value)
    • Attribute overlay for active attribute
      • Drawing edge attributes
        • Thick wireframe drawing like seams, bevel weights, etc.
    • Gizmos to edit original data (for later)
      • Checkboxes for booleans
      • For vectors: directions, offsets, etc.
      • Rotation gizmos
      • Connects to the node tools project a bit
  • Attribute paint modes
    • Artistic paint mode
      • Supports meshes
      • Supports point and corner domains, color types, and float attributes/textures
      • Sculpt color implementation should apply to any attribute type (within reason)
    • Technical paint mode
      • Supports meshes, curves, points
      • Supports any domain and any type with special tools for some types, also supports textures
      • In between edit mode and the more artistic painting currently in sculpt mode
      • Choose a value of any type and paint it directly
      • More advanced painting
        • Flow maps
        • Face set interaction like sculpt mode
    • An option on the attribute chooses whether it shows in the more artistic mode
    • Paint canvas
    • Texture canvas
      • Choose between a list of all textures associated with an object
        • The canvas can be sourced from (1) material + layer stacks; (2) modifiers; (3) attributes
        • The source is used for filtering and visual clues
      • Embrace syncing the active texture between the node editor and the paint modes
      • Make setting the active texture canvas an explicit action
      • UV is set per layer stack, or sourced automatically from the node texture
      • Multi-object texturing
        • Painting on the same texture across multiple objects
        • Bleeding should be smart about other users of the same texture
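
A small sketch of the multi-selection behavior mentioned above (“the buttons/sliders ‘offset’ the mean value”); the function and its semantics are an assumption based on these notes, not the actual implementation:

```python
def mean(values):
    return sum(values) / len(values)

def offset_selected(values, selected, new_mean):
    """Shift all selected values so their mean becomes new_mean,
    preserving the relative differences between them."""
    current = mean([values[i] for i in selected])
    delta = new_mean - current
    return [v + delta if i in selected else v for i, v in enumerate(values)]

weights = [0.2, 0.4, 0.6, 0.8]
edited = offset_selected(weights, selected={1, 2}, new_mean=0.75)
# selected values 0.4 and 0.6 (mean 0.5) shift by +0.25 to 0.65 and 0.85
```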

Dynamic Socket Type

  • Initial type is “Unassigned”, no values are shown for the dynamic sockets.
  • Dynamic sockets can be chained, preserving their unassigned type.
  • The first time a type is connected to a socket, the type is set for good
  • To set it to a different type go to node options or context menu
  • To reset to Unassigned, first disconnect the sockets then go to node options or context menu
  • Don’t show the type option in the node by default
    • Have it in the node options
    • Optionally also in the context menu
  • Dynamic sockets are ignored in the modifier UI
  • It needs a design for how it looks when unassigned (e.g. rainbow)
  • When assigned they look like regular sockets, possibly with an extra indicator (so users know they are dynamic)
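
The rules above can be summarized with a toy model (the names and behavior here are assumptions drawn from these notes, not the real implementation):

```python
UNASSIGNED = "Unassigned"

class DynamicSocket:
    """Toy model of a dynamic socket following the rules described above."""

    def __init__(self):
        self.type = UNASSIGNED  # initial type; no value is shown yet

    def connect(self, other_type):
        # Chaining to another unassigned dynamic socket changes nothing.
        if self.type == UNASSIGNED and other_type != UNASSIGNED:
            self.type = other_type  # first concrete connection sets the type
        # After that, changing or resetting the type is an explicit action
        # in the node options or context menu, not a side effect of linking.

s1, s2 = DynamicSocket(), DynamicSocket()
s1.connect(s2.type)   # dynamic-to-dynamic: stays Unassigned
s1.connect("Float")   # first concrete type: set for good
s1.connect("Vector")  # ignored; the type does not silently change
# s1.type == "Float"
```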

Dynamic Socket Count

  • Designed to allow encapsulation of complex behavior in a flexible way
  • Sockets are grouped together in clusters
  • The last entry in the cluster is like the group input/output sockets and makes a new connection
  • “Udon” noodles pass multiple sockets at the same time
    • It’s basically a list of sockets of different types
  • Udons can be exposed to the group output as one of the inputs to the cluster
  • Corresponding inputs and outputs are aligned
    • Builtin nodes like capture, sampling, raycast, etc.
    • For group nodes, the alignment would happen whenever Blender can figure out that the mapping is clear
  • Join node
    • Joining attributes currently requires the hack of adding them later
    • Option 1
      • Cluster socket of multi-input sockets
      • Requires ugliness for matching the order between multi-input sockets
    • Option 2
      • Cluster of clusters that each contain the same socket types
    • Option 3
      • Join in a separate node
    • Currently unresolved
  • Multi-input socket
    • Technically redundant now, though more elegant on a UI level
    • The join node might have to move away from multi-input sockets


Simulation

  • Physics vs. simulation
    • Simulation is mostly about retrieving data from the previous frame
    • Physics is a subset of simulation, with specific solvers for physical phenomena
  • Global vs. local
    • Local is a single object
    • If we start with a local case, it should extend to the global case without completely changing the design
  • Local simulation design
    • Simulation input and output nodes
    • People do so much with the currently available tools that just giving access to the previous frame’s data would open up many possibilities
    • The design could be similar to the repeat loop design, but we need nested loops, not necessarily nested simulations.
    • Simulation input and output nodes control the simulation
    • Restricting simulation input and outputs
      • There is no reason to restrict inputs
      • For inputs that are only evaluated on the first frame, the simulation input node is used.
      • Outputs can only be connected through the simulation output, for improving the subframe workflow
    • Time
      • Simulation input node has an output for the time delta since the last evaluation
      • Scene time
        • Time of the animation system, tied to the scene
      • Simulation time
        • For the interactive mode, separate from the scene time
        • Specific per object
        • If you only need the delta, the absolute value isn’t necessary
    • Simulation input node
      • Inputs
        • All inputs are evaluated only once at the start of the simulation
        • “Run” (True by default)
      • Outputs
        • All inputs but “Run”
        • Delta Simulation Time
        • Elapsed Simulation Time
    • Simulation output node
      • Inputs
        • All inputs but “Run”
        • A “Stop” value that breaks the simulation until the simulation clock is restarted
      • Output
        • All inputs but “Run”
        • “Simulation started”
        • “Simulation ended”
        • Elapsed Simulation time
    • Run and stop
      • The simulation still caches the final step on which the stop was set
      • The simulation always passes on the final cached step
        • When “Run” hasn’t been enabled yet, the simulation passes the input data through
      • When “Stop” is on, the simulation passes on the final cached step
    • Cache
      • We cache on the modifier, similar to existing physics systems
      • We may version it to move the cache to a simulation datablock later
      • The caching itself (its data format) will focus on performance
        • It may even use .blend files for that
      • Because subframe mixing requires mixing between cached frames, it is impossible to pass links directly out of the simulation without connecting to the simulation output node.
    • Frame UI
      • The goal is to make it clear that links cannot pass out of the simulation, but they can pass in.
      • All simulation nodes go inside of a frame
      • On each side of the frame there are sliding input/output nodes with the simulation state sockets

Automatic Caching

  • Anonymous attributes
    • Outputs can’t depend on whether other outputs are needed
    • Higher level way of determining when anonymous attributes should be propagated
    • Anonymous attribute names derived from the compute context
    • Force the deletion of anonymous attributes when leaving geometry nodes
  • Heuristic for what to cache
    • One idea is that each thread would cache arbitrary data after a certain amount of time passed
    • A simpler node-based idea is to cache a node’s output when it took a certain amount of time
    • Finding a good heuristic will require more testing
  • Detecting changed inputs
    • Compute a “proxy value” for each input, which is like a hash. Generally they shouldn’t change when the data hasn’t changed, but it’s okay when they do.
    • Propagate the proxy values through the node tree, mixing them together to determine when they changed from the cached value
  • Next steps
    • Anonymous attribute static analysis, making anonymous attribute handling more functional
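
A minimal sketch of the proxy-value idea above (the hashing scheme and mixing function here are assumptions for illustration):

```python
import hashlib

def proxy(value):
    """Cheap hash-like proxy for a single input value."""
    return hashlib.sha1(repr(value).encode()).hexdigest()

def mix(*proxies):
    """Propagate proxies through a node: mix upstream proxies into one."""
    return hashlib.sha1("".join(sorted(proxies)).encode()).hexdigest()

cached = mix(proxy(1.5), proxy("suzanne"))
# Re-evaluating with unchanged inputs yields the same mixed proxy,
# so the cached result can be reused.
unchanged = mix(proxy(1.5), proxy("suzanne"))
# A changed input produces a different mixed proxy, invalidating the cache.
changed = mix(proxy(2.0), proxy("suzanne"))
# unchanged == cached, changed != cached
```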

Freeze Caching

  • Also requires anonymous attribute static analysis for improved usability
  • Without the “edit mode node” functionality, the feature is relatively simple
  • When geometry is frozen, it is saved in the file by the modifier
    • Requires saving and loading non-main meshes
  • Anonymous attribute handling
    • The node tree automatically freezes fields used with the frozen geometry as necessary
    • Those fields are automatically populated in the checkpoint node and connected to their original nodes
  • The node has datablock slots for the different domains
  • The datablocks inside a node tree are exposed in the modifier “Internal Dependencies” list
  • This node is used for the primitive objects (e.g., Cube)


Geometry Object

  • Related to Freeze Caching
  • All the geometry-related types are unified as a “Geometry” type
  • In the object panel there is a list of geometry slots
  • The list is read-only, based on the node-trees
  • Only one geometry can be active, and this is used to determine the available modes and panels (e.g., UV)




Usability

  • Gesture operator to create capture attribute node like the mute or cut operators
    • If a geometry link is included, it is used, otherwise there is no geometry connected
    • In the toolbar and possibly as a shortcut
  • Field connection visualization
    • While dragging a field, differentiate sockets that lead to an evaluation in a non-compatible context (don’t contain all necessary attributes)
    • Graying out sockets and links is preferable over highlighting, since many sockets would be highlighted
  • Field evaluation context inspection/visualization
    • In tooltips for field sockets, show information for evaluation contexts
      • Domain and the kind of context
    • Field input warnings
    • Visualizing the context
      • Visualize the nodes that the field is evaluated in (and optionally the geometry socket)
      • Visualize the path to those nodes and sockets
      • Triggered explicitly by an operator or socket selection or delayed on a hover
      • Possibly toggled as an overlay option
    • Visualizing evaluation on sample nodes
      • Node panels can give a visual suggestion for many nodes
        • Organized in the node group sockets list as a tree
        • No header or dropdown in many cases, just a common background color or separator to group related sockets
  • Interpolate domain naming
    • It sounds like it’s interpolating to the domain that’s selected but that’s not what it’s doing
    • It used to be called “Field on Domain” which was clearer. “Evaluate on Domain” is better.
  • Add menu
    • Just start typing to open the search
    • Add multiple ways to access nodes with different properties in search
    • The add menu should use submenus more
    • Focus on searching

Node UI

  • Subpanels
    • Rationale: keep direct/immersive interaction while providing better grouping and hiding away unnecessary sockets
    • The idea is to give users the framework Blender uses, with constraints if needed
    • Subpanels can be used for outputs and inputs
    • By default they can be expanded or collapsed
    • Subpanels are organized by a tree view in the sidebar
    • If a subpanel is closed but a socket is linked, it is still displayed but with no text
    • We need to collect use cases to map out the final solution
  • Hiding sockets
    • Rationale: clean up the UI to the point where exactly the parameters you want to control are exposed
    • When every socket in a panel is hidden, the panel is hidden as well
    • Add a bar to the bottom of sockets to roll/unroll the node

Math Nodes

  • Split up the math node a bit. The ideal segmentation is arbitrary.
    • We don’t want to use a node per operation because that makes switching more complicated, but the current math node has too many responsibilities.
  • Math node
    • Does generic math operations
    • Uses dynamic types
  • Vector math
    • Only does operations that make sense on vectors
  • Compare node
    • Dynamic type, generic or type-specific comparison operations
  • Needs more discussion

Preview Node

  • Currently the evaluator always calculates the output even if it isn’t visible in the UI
  • The depsgraph doesn’t know about the UI currently, but it should be able to skip evaluation when the result isn’t visible
  • Take UI visibility into account for the depsgraph
  • A more manual solution is “pausing” evaluation, which is related to cancelling
    • A patch currently works on Linux, but Blender needs to be okay when the evaluated state doesn’t match the expectation

Menu Switch / Enum

  • Requires dynamic socket type
  • Design focuses on a “Menu Switch” node, which is used to define the enum options in the same place it switches between them
  • Has an enum selector and inputs for each option where the names are defined
  • Adding another enum is part of the node, it is not an udon
  • Sockets are called “Menu sockets”
  • To add and name the sockets users go to the node options
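
Functionally, the node behaves roughly like a dictionary lookup; this toy sketch (option names are illustrative) shows the idea of defining the options in the same place the switching happens:

```python
def menu_switch(selection, **options):
    """Toy model of a Menu Switch node: one input per named enum option."""
    if selection not in options:
        raise KeyError(f"unknown menu option: {selection!r}")
    return options[selection]

# The enum options and their per-option inputs are defined together on the node.
result = menu_switch("Smooth", Smooth=0.5, Sharp=1.0, Flat=0.0)
# result == 0.5
```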



Comments

  • Comment node
  • Options for transparency
  • Edit the text directly in the node editor
  • It might be helpful to attach it to nodes
  • Resizable in all directions


Loops

  • Serial Loops (Repeat Loops)

    • Very similar to the simulation design
    • Max iteration count input, current iteration is passed to the inside
    • “Stop” output on the inside to allow stopping iteration early, also outputted to the outside.
    • Debug iteration index in the UI for socket inspection and viewer nodes
    • It may make sense to add a switch to disable logging completely
  • Parallel Loops

    • Very similar to the serial loop
    • Two modes: count; elements
    • Count:
    • Elements:
      • The geometry is separated into groups
      • Option: Domain
      • Loop input node inputs:
        • Geometry
        • Geometry ID (fallback: Index)
        • Selection
      • Loop input node outputs:
        • Loop Index
        • Loop Geometry
      • Loop output node input/output: Geometry
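
The serial loop semantics above (a maximum iteration count, the current index passed inside, and a “Stop” output for ending early) could look like this in Python; all names here are assumptions:

```python
def repeat_loop(state, body, max_iterations):
    """Toy model of a serial (repeat) loop with an early-stop output."""
    for index in range(max_iterations):
        state, stop = body(state, index)
        if stop:
            break  # the "Stop" output ends the iteration early
    return state

# Example body: keep doubling a value, stop once it exceeds 100.
def body(value, index):
    value *= 2
    return value, value > 100

result = repeat_loop(1, body, max_iterations=10)
# 1 -> 2 -> 4 -> ... -> 128, stopping early on the 7th iteration
```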

Interesting read. What are the priorities for those projects?

That was very wide-reaching. Fascinating stuff. Is it alright to ask a couple of questions and share a couple of thoughts, so that the blog post may hopefully clarify them?

So far, I haven’t seen an explanation of what the interactive mode is really meant to be. Is it a way to drive simulations using different input devices and record/cache them as they play out? Is it a logic system?

Usually in a game logic main loop you would multiply every time-dependent property by the delta time, so as to account for variable/unpredictable frame rates. I suppose Blender will also need this if the simulations are played in the so-called “interactive mode”, but in regular caching scenarios the delta would be constant throughout the timeline.
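
The delta-time pattern described here can be sketched as follows; the point is that an interactive mode with variable frame rates and regular cached playback with a constant delta share one code path:

```python
def step(position, velocity, delta_time):
    """Advance a time-dependent property, scaled by the time delta."""
    return position + velocity * delta_time

# Constant-delta playback: 24 fps for one second of animation.
pos = 0.0
for _ in range(24):
    pos = step(pos, velocity=2.0, delta_time=1.0 / 24.0)

# Variable deltas covering the same total time give the same result.
pos_var = 0.0
for dt in [0.5, 0.25, 0.25]:
    pos_var = step(pos_var, velocity=2.0, delta_time=dt)
# both end up at (approximately) position 2.0 after one simulated second
```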

Nested simulations? Food for thought…

With the advent of geometry nodes, the object types separation has started to feel arbitrary. I’m all for this.

Does that mean entering edit mode on a geometry object containing different geometry types will only allow the user to interact with a single type at a time?

Other nodes have been ordered by usage and are “type-agnostic”, so to speak: map range has a float and a vector mode, and so do compare, random value, and possibly others. Why not do this with math and vector math? Additionally, some of the comparison operators in math could go to compare instead.

Do you mean whether or not the output is visible within the view frustum, or just whether or not there’s a 3D viewport in the current layout?

I’ve been thinking about this. Is there actually any difference at all ?


Concerning simulation and solvers: would there be any merit in looking at NVIDIA’s PhysX framework, now that it’s open source and under the BSD-3 license?

It seems (from the outside) that developers with experience in simulation technology and algorithms are few and far between. Would teaming up with NVIDIA be worth thinking about?
Assuming it can run on any hardware, not just NVIDIA.

Not a request, just curious about it from a dev pov.


I think the difference between the two is:
Repeat loops run a set of instructions at a single point in time. The output is one frame of data.

Simulations, on the other hand, also do a “loop”, but each iteration resumes from a frame and stores its data to that frame. So the output would be a set of data over a frame range. It might also only calculate when the user advances the timeline by a frame.

It seems that PhysX 5 has some CUDA-only features, like the PBD system and soft bodies.


As far as I know, it’s basically the first thing, but maybe the full vision includes some bits of logic. The most basic thing is just another source of time besides the animation frame, to do simple things like scatter a bunch of points with physics at a single frame of the animation. Our first implementation would probably just be that, a new time source and some UI to control it.

Good point, that’s an interesting thing to keep in mind for interactive simulations. I’d imagine that would be a “time control” option added a bit later after we have the basics of the time source working.

I’m not sure nested simulations make sense, given the need for time progress between each simulation evaluation. Loops inside of simulations definitely make sense though!

Yes. The reality is that most edit modes and tools in Blender are designed for just a single geometry type, and changing that might be prohibitively complicated. And you only have a single geometry active at a time, which is just a single type.

That’s what this section is supposed to mean, it’s a little terse though. The “Vector Math” node would be operations that only make sense on vectors, for example.

That’s more like the latter: “Don’t evaluate the object when it isn’t visible in any viewport because of various hiding options”.

The first phase of the simulation project wouldn’t need any specific solver implementation, since it’s just about hooking up the existing geometry nodes and using them to implement simulations. And maybe adding a few nodes to fill in the gaps.

After that, it would be good to look into using existing libraries as black-box solvers for more specific use cases that can’t/shouldn’t use a node-based implementation. It’s nice to have a library that can run on the GPU, but probably not always necessary. It shouldn’t rely on vendor-specific features though, which seems to be the case for PhysX.


A very exciting list indeed! The import nodes sound intriguing and beg the question of whether export nodes are possible. Has there been any discussion on this?

Combining loops, caching and exporting could yield some very interesting asset pipelines for gamedev. I guess the network would have to be aware of when the output node has finished “cooking” before exporting. You can already do this with a geonodes/Python combo, but having an all nodal solution would be very nice.

For exporting, we’re more likely to rely on T68933: Collections for Import/Export. That should give more control over when exporting actually happens. Generally, import is much clearer than export in a node tree, since export confuses the idea of having a single output location. Maybe it will happen eventually though, not sure!


Post updated with more polished wireframes


UI-wise… I may be taking the mockups too literally, but some years back that’s how node groups looked. They were changed to the groups we have today, and that was an improvement. Then during the initial months of geometry nodes the attribute processor was imagined as basically the same thing: a frame with sockets on the sides. I don’t see it working very well in the case of loops/solvers either; it seems like a cumbersome solution to something subnetworks/node groups are already suited for: they’re contained, reusable, and allow exposing parameters…

I guess if we want to show the contents of a node group overlaid onto the parent node tree contents, that is better formed as its own UX mechanic, applicable to all node editors: on hotkey hold, expand a node group as with a magnifying glass, just to peek inside without losing context. Commit to entering the node group with a click, or release spacebar and stay at the hierarchy level you were in.
If you’ve used macOS, that would be similar to the built-in utility that lets you inspect a document by holding, what was it, spacebar?


Personally, I’m not all that in-tune with these suggestions. The hardest part for a physics solver is the enormous number of options involved and presenting them in a usable yet very flexible manner. Physics is notorious for having so many variables that it’s basically impossible to make deterministic, and adding simulation quality options further obfuscates it.

Whatever it is, the easy part will be the design for inputting and outputting data. All nodes have this and there are already style guides for it. The hard part is “what do we do with caches?”.

I do like the idea of a bisected node. It would instinctively show that there’s a separation between its input and output: that what it outputs may not be totally related to its input.

So, we have several available schools of thought, from least to most freeform:

  1. Simulations should be a kind of “uber node”: a black box with many inputs, options, and outputs, with a “bake” button and a “load cache from disk” entry. These are easier to add and integrate, potentially even as addons.
  2. Simulations should be black boxes inside a loop node, where you can do additional per-loop operations on inputs, with the outputs able to be “frozen”
  3. Simulations should be like any other loop node, with their building blocks available to all of geometry nodes, whether the node group is hidden or exposed as part of a “mega-node”. Specific solvers will be preset groups, and the results should be able to be “frozen”.
  4. Simulations should be buildable from the ground up by being able to pass information from a previous frame/sim step. It’s the user’s job to avoid making infinite loops. Some more nodes will be added to enable this, as well as caching the results of loops (lots of people think it’s a bad idea because it would be so unwieldy and basically require using math to code a pyro sim)

am I understanding this right?

I’m with @Hadriscus on the node group thing. Having all the inputs converted into sub-node-group inputs and outputs makes sense, though it would very much cause issues when it comes to the UI for baking a sim.

This is one thing that the black box and “mega-node” do well. On the mega-node, there are three distinct segments, and a black box node can easily have a bake button, a clear button, and a cache load section.

I’m not sure I’m satisfied with any of these solutions, though the pure black-box method might be my preferred one for now, as they can take into account multiple frames, unlike a naive loop implementation, where you can only look at the previous step.

You’ll also want to be able to tell the user if their inputs do not match their outputs for these. This I am a little more satisfied with, but not totally. Simply grey out the baked parts that cannot be changed, and if anything upstream from the sim changes, make the node say it’s out of date.

(perhaps a little thing like this would be better?)

Of course, the whole “out of date” thing doesn’t mean much if it’s replay only, but that’s one of the issues with modern Blender caches.

Replays are buggy, poorly understood, and respond poorly to timeline scrubbing. There is a place for them, but it should be a checkbox that enables replay caching and disables disk/file caching. Replay caching should be dumped and recalculated frame by frame during render. Given their relatively troublesome temperament, replay caches should be an option, not the default.


Anybody can elaborate on :

  • File string input
  • (Optional) object path
  • Frame input
  • Geometry output

Would that give me the possibility to import an alembic file, distribute it on a plane, but have the instances with different offsets? Like right now, if you distribute, for example, a person jogging, they will all jog in perfect sync. What I need is to be able to have a random value to offset them. But you have to be able to decide how many different instances you want, because if your sequence is 1000 frames long, you could end up with 1000 instances and that would be insane. The best would be to be able to define a starting frame, a number of frames between the offsets, and how many you want. So if you are making a crowd and people are clapping hands, you could sync them all knowing that the model starts clapping at frame 53, then every 22 frames. So first instance at 53, then 75, then 97, until you reach the desired number of instances. And a random offset of maybe a frame or two to make them look less mechanical.
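
The scheme described here (start frame, fixed interval, instance count, plus a small random jitter) is easy to express; this is a hypothetical sketch, not an existing node:

```python
import random

def instance_start_frames(start, interval, count, jitter=0, seed=0):
    """Deterministic start frames for instances, with optional random jitter."""
    rng = random.Random(seed)  # seeded so results are reproducible
    frames = []
    for i in range(count):
        offset = rng.randint(-jitter, jitter) if jitter else 0
        frames.append(start + i * interval + offset)
    return frames

# Clapping crowd: first instance claps at frame 53, then every 22 frames.
print(instance_start_frames(53, 22, 4))            # [53, 75, 97, 119]
jittered = instance_start_frames(53, 22, 4, jitter=2)  # up to 2 frames of variation
```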

I’m super hyped for this. I use Houdini day in and day out and this is what I have been waiting for. Nice going guys!

Hi BlenderBob! Love your videos!

You’re wondering about offsetting the time of each Alembic instance once they have been instanced on points? So, I imagine the question would be: Do instances carry along their input attributes? Or could Frame input be an attribute on each instance?

How I would imagine this working is you import your jogging guy with the alembic node, instance on points, and then make a “Parallel” loop. Inside that loop, bring the “Frame input” attribute in (This is the part I’m not sure about) and offset it by a random value for each element. The output of the loop should have all the instances with a different time offset.


Remember that the same geometry nodes group can be used on multiple objects with different parameters, or as a linked asset across many .blend files. For that reason the bake/cache is planned to be on the modifier that instances the geometry nodes.

The geometry nodes only describe the behavior, which may include custom or black box simulations, multiple simulations, start/stop or timestep control. But the bake button or filepath to bake to are separate from that.


Thanks @galenbeals
Also, because we are dealing with videogrammetry, the texture changes every frame, so the shader needs to offset the texture on a per-instance basis.

One guy offered me a really cool solution. Since it’s not possible at this point, he was offsetting the UDIMs on the instance instead. He had a working prototype. It was awesome. But when I tried it on the real geometry, we realized that Blender loads ALL the UDIM textures into memory (at least in the viewport), so Blender just crashed since we have a 500-frame sequence of 4K textures. When we did the tests we were using 32x32 pixel images, so we didn’t see the problem at the time.

I also need to be able to have total control over the offset. If I do a stadium and the client wants the crowd to do the wave, which is actually the case right now and which I won’t be able to deliver unless we go to Houdini, then it becomes more problematic. I would record a bunch of people that would all get up at, let’s say, frame 250 out of 500. I would need to be able to make sure that they all get up at the right time to do the wave. So it’s not a random offset but one that is based on a specific pattern. We are not out of the woods.


Time/frame input is highly needed for the Texture node in both Geonodes and Shader editor!


I found that Cycles is more efficient with UDIM textures. If you need to visualize them, use Cycles in the viewport.

Cycles clears the memory after you switch to the regular OpenGL viewport. On the other hand, the lookdev/Material Preview mode uses 30%+ more memory with such textures and never clears the texture memory like Cycles does, which will most likely lead to crashes with large UDIM textures.

We need proper .tx support in Blender for efficient high resolution texture workflows.


I always feel the need for a preview mode other than the Plane preview, such as Cube, Sphere, or Monkey, because Plane preview textures are not very accurate in a scene with vectors.