
Node Tools Workshop

15 to 17 May 2023, Amsterdam.

Participants:

  • Andy Goralczyk
  • Dalai Felinto
  • Jacques Lucke
  • Lukas Tönne
  • Simon Thommes

Introduction

A 3-day workshop at Blender HQ about adding a high-level interface for assets and simulations. For example, to have tools to add mushrooms, scale them, and scatter them randomly.

Goal

“Create purely artist-driven tools.”

  • Tools should empower artists, not constrain them.
  • Add-ons already address a lot of the use-cases; however, creating them involves too much overhead.

Tools

Design for scattering tools

  • Ship the asset together with the interface to use it.
  • Use-case: mushrooms, scattering

Design for realtime interaction
These cases involve realtime capture of the user interaction (e.g., painting), which is enhanced by a geometry-nodes setup.

  • Auto-keying + puppetry
  • Auto-keying + VJing
  • e.g., sculpting + time + bake
  • e.g., paint + time + bake

The realtime use-case was postponed; the initial focus is on the scattering use-case. But essentially, realtime is just the simple use-case with time + bake added.

Also, instead of the scattering use-case, we focused on the fracture use-case.

Fracture use-case

An asset that has a node-tree with a few inputs that can be edited by different tools. Those tools are to be exposed together with the node-tree.

For example, the user drags the Fracture node-tree from the Asset Browser onto an object. By doing so:

  • The object gets a modifier with the fracture Geometry Nodes node-tree.
  • Three new tools become visible while this object is active (see the sketch below).
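
A minimal sketch, in Python, of what this drag-and-drop amounts to under the hood, assuming the asset’s node group is already available in the file under the hypothetical name "Fracture":

```python
import bpy

# Hypothetical sketch: what dropping the "Fracture" node-tree asset onto the
# active object roughly amounts to. The node group name is an assumption.
obj = bpy.context.object
mod = obj.modifiers.new(name="Fracture", type='NODES')
mod.node_group = bpy.data.node_groups["Fracture"]
# The tools shipped with the asset would then show up while this object is active.
```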

Realtime Interaction

Layout (or animation) tools that are performed interactively on top of simulation or animation playback.

  • Record keyframes or geometry (grease pencil or curves) over time.
  • Auto-Keying is a good starting point (nice to have: playback speed control).
  • Needs support for arbitrary UI inputs (e.g., any property in the UI which may be animated).
  • If other inputs are supported in Blender (3D navigator, VR controllers, MIDI controllers), they could be mapped to a shortcut and used as well.

Note: Grease Pencil already supports this, although it is not hooked up with auto-keying or scene playback.
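
As a rough illustration of the capture loop, here is a minimal Python sketch, assuming auto-keying plus a frame-change handler as the recording mechanism (keying the active object’s location stands in for whatever UI input is being captured):

```python
import bpy

# Enable Auto-Keying so interactively changed values get keyed during playback.
bpy.context.scene.tool_settings.use_keyframe_insert_auto = True

def record_input(scene, depsgraph=None):
    # Sketch: key the active object's location on every frame change while
    # the animation plays back. A real tool would key whichever UI input
    # is being captured.
    obj = bpy.context.object
    if obj is not None:
        obj.keyframe_insert(data_path="location", frame=scene.frame_current)

bpy.app.handlers.frame_change_post.append(record_input)
```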

Checkpoints

We talked about the latest proposals for checkpoints.

There are multiple use-cases for baking:

  • Simulation
  • Editing part of the node-tree
  • Performance
  • Re-timing (added later, not part of the following example)

It would be good for those use-cases to share the same solution under the hood and the same concept (the checkpoint). Effectively, a single node-tree may have all three use-cases combined:

In this example (of the fracture use-case), the tool inputs generate curves internally. Those curves are then used for a “Sparkle” simulation (to generate particles) and to fracture the original mesh.

After using the fracture tools to create the initial fracture system, artists can:

  • Have a checkpoint after the curves to tweak them. That means the tools will no longer create new fracture islands/cracks/impacts.
  • Have a checkpoint to bake the sparkle simulation (so this can be sent to a render-farm).
  • Have a checkpoint to bake the final fractured geometry (for performance reasons).

In some cases the different steps of a simulation may be used by different objects. For example, one object can be used to generate ocean waves from curves, while another one generates foam and water spray. In this case, checkpoints could be used to output different stages of an object’s geometry to another node-tree.

We still need mockups showing how the different use-cases would be exposed to users, as well as the user interface for the overall view of the checkpoints in a scene/object.

Checkpoints In-depth Discussion

General principles:

  • Store animated data (almost) always outside .blend.
  • Only baked data can be accessed at arbitrary frames (independent of the current frame).

User Stories:

  • Edit static (i.e., non-animated) “procedural hair” destructively.
  • Edit simulated hair for the entire range (e.g., delete or offset).
  • Edit specific (sub-)frame baked geometry.
  • Bake simulation to send to render-farm.
  • Bake heavy “animation” (e.g., waves) for animators.
  • Re-time baked cache (e.g., bake on ones, render on twos; bullet time; re-time).

Concepts

To understand a checkpoint we first need to understand the concepts of Input nodes and Passthrough nodes.

Input Node
A node that brings new data into the node-tree. E.g., an Importer node, an Editable Mesh node, or an Object Info node.

Passthrough Node
A node where the data just passes through. E.g., Gizmo nodes (see below), any muted node, and technically also the Reroute element.

A checkpoint is a combination of both. It can work either as an input or a passthrough, depending on whether or not the node is baked. In a way it works like a “quantum” node.
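
As a hypothetical sketch (not current Blender API), the evaluation of a checkpoint could look like this:

```python
def evaluate_checkpoint(node, upstream_geometry, frame):
    # Hypothetical evaluation of a checkpoint node (not current Blender API).
    if node.is_baked:
        # Acts as an Input node: baked data can be read at arbitrary frames.
        return node.read_bake(frame)
    # Acts as a Passthrough node: the upstream data simply flows through.
    return upstream_geometry
```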

Nodes Overview

Input Nodes

  • Editable Geometry input
  • I/O Nodes (e.g., Alembic importer)
  • Import Bake

Checkpoints

  • Simulation Zones
  • Freeze

It is still not clear whether editing is an option on top of the Freeze node, or if it requires a separate node.

The baking options and controllers for both the simulation zone and the freeze node should be accessed in a similar way, with operators inside the node editor to bake selected “checkpoints”, operators at the scene level to bake selected objects, and so on.

The baked data is stored at the modifier level.
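
Following the principle of storing animated data outside the .blend file, the per-modifier bake storage could be organized roughly like this (a hypothetical sketch; names and layout are assumptions, not the actual design):

```python
import os

def bake_path(blend_filepath, object_name, modifier_name, frame):
    # Hypothetical layout: a cache directory next to the .blend file, with
    # one sub-directory per object and modifier, and one file per frame.
    base = os.path.splitext(blend_filepath)[0]
    return os.path.join(base + "_cache", object_name, modifier_name,
                        "%06d.blob" % frame)

# e.g. bake_path("/project/shot.blend", "Cube", "Fracture", 123)
# -> /project/shot_cache/Cube/Fracture/000123.blob
```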

Tools and Group Inputs

Tools are to be implemented as part of the modifier inputs. Similar to how an input can have a subtype and a range, an input could also have a default tool used to edit it.

Only tools supported by Blender are accessible for the Group Inputs. So if an input type requires a Flow Map editor, we first need a tool in Blender that can do that. That also means that clicking on a modifier property’s tool will:

  • Change the mode to the expected mode (e.g., edit mode).
  • Set the active attribute or texture to the one used by this property.
  • Switch the active tool to use the tool defined for this input.

The mapping between modifier inputs and tools can be done automatically in some cases (i.e., different data types can have a default tool).
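
In terms of today’s Python API, clicking such a tool button would do something along these lines (a minimal sketch; the helper and its default mode/tool are assumptions):

```python
import bpy

def activate_input_tool(obj, attribute_name,
                        mode='VERTEX_PAINT', tool_id="builtin_brush.Draw"):
    # Hypothetical helper mirroring what clicking a modifier property's tool
    # button might do. The mode and tool id here are example values.
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode=mode)              # switch to the expected mode
    idx = obj.data.attributes.find(attribute_name)  # set the active attribute
    if idx != -1:
        obj.data.attributes.active_index = idx
    bpy.ops.wm.tool_set_by_id(name=tool_id)         # switch the active tool
```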

Example: Landscape Asset

A landscape asset that adds water, trees, grass, or flowers.

The modifier itself would have those properties:

  • Tree-type
  • Water turbulence
  • Wind strength
  • Wind direction (gizmo)

Besides that, a single texture map (or ID attribute) would control which asset type to instance. This map wouldn’t be exposed as a modifier property, but instead as four separate tools (a sketch of the mapping follows the list):

  • Tool: Attribute Paint
  • Name: Paint Tree / Water / Grass / Flower
  • ID: 1 / 2 / 3 / 4
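
For illustration, the tool-to-ID mapping and the backing attribute could be set up like this (a sketch; the attribute name "asset_id" is an assumption):

```python
import bpy

# Hypothetical mapping between the four paint tools and the ID they write.
ASSET_IDS = {"Paint Tree": 1, "Paint Water": 2, "Paint Grass": 3, "Paint Flower": 4}

# Ensure the integer point attribute the tools would paint into exists.
mesh = bpy.context.object.data
if "asset_id" not in mesh.attributes:
    mesh.attributes.new(name="asset_id", type='INT', domain='POINT')
```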


Besides that, we could support high-level modifier input types such as:

  • Geometry Input (object, collection, surface, self).
  • Map Input (value, attribute, texture).

Gizmos

Gizmos also work as an interface to a node-tree asset. We explored a few design options until we landed on Gizmo nodes, which allow for the most flexibility.

In essence, a Gizmo node is a node that brings a potential new data entry into the node-tree, but effectively also works as a passthrough. The Gizmo node necessarily has a Value input and output, besides gizmo-specific options.
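
Conceptually, a Gizmo node then evaluates like this (a hypothetical sketch, not current Blender API):

```python
def evaluate_gizmo_node(value_in, gizmo):
    # Hypothetical: while the user drags the gizmo in the viewport it acts as
    # an input, overriding the upstream value; otherwise it is a passthrough.
    if gizmo.is_being_dragged:
        return gizmo.value
    return value_in
```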

In the Vase Asset example, both the rotation and the button gizmo depend on the Height input.

The gizmos should work similarly to the camera and lamp gizmos. They are always available when the object is active, but their visibility can be controlled in the Viewport Gizmos menu.

Tools

The Group Input items get a new option to define a tool used to edit them.
This is exposed as an edit button next to the modifier properties, which sets:

  • Active tool (+ brush).
  • Active object (in case the attribute to be edited is on a different object).
  • Active mode.
  • Active attribute.

The idea is to focus on having built-in tools in Blender that support editing attributes (e.g., flow map) and then allowing those tools to be hooked up with different attributes.

This gets us 80% of the way there and helps technical artists who are comfortable with the existing tools in Blender. It also brings the artist into the right context in case they need to access another tool for the same attribute.

More asset-specific tools would be supported once Blender has support for node-based tools.


In some cases we still want to provide an easy-to-discover “tool” that is available when the asset is active. For that we can have an icon option which, when set, makes the tool show up in the toolshelf. This would effectively be a light layer on top of the actual tool being used.

To get this working as elegantly as possible we would need to have multi-data objects implemented, so there would be no need to change the active object.


Document updated with day 2 topics:

  • Realtime interaction
  • Checkpoints
  • Tools and Gizmos

I will write the notes of the third and final day next week.

But we talked about:

  • gizmos (it is on Penpot)
  • Tools
  • Baking (checkpoint nodes)

These are the whiteboards for tools and baking:



Do you have a link?


gizmos (it is on Penpot)


I’m done with the main pass for the notes. Sections added recently:

  • Gizmos
  • Checkpoints In-depth Discussion
  • Tools

There is still one thing missing, which is the definition for checkpoints of where the data is stored. We talked about it, and the idea is to follow what we have for simulation zones. I want to double-check with the rest of the team before putting this into words, though.


I still don’t understand how we are going to be able to edit procedurally generated geometry: enter edit mode, edit the geometry, exit edit mode, and have it all save itself with a single node, with no way to “Save” via a button…

Let me take an example: I made a scatter on a plane… but I would like to move 2 or 3 points of my scatter, on which I instanced trees for example… so I would have to put the Freeze node after making my scatter?

I don’t understand how Blender intends to make a generated result editable and then turn it back into a “generated” one… I know very well that a certain competing software starting with the letter H does this, and does it excellently! But I can’t wait to see the same thing in Blender… and I especially need an explanation to really know if it is going to work like this… from procedural to manual editing and back to procedural…

In my dreams, I posted something similar on Right-Click Select, and it worked like that:

Please explain it to me clearly :slight_smile:


I think, and I’m not entirely sure, that if you edit, this is similar to applying the modifier, going into Edit Mode, and then using a new Geo Nodes modifier that works with the result of your edit, but with some advantages: when you edit a result with instances, you modify the instances themselves (at the point where they are instanced), not the actual geometry, so the instancing is retained, something that cannot be done by applying the Geo Nodes modifier.

The moment you decide to edit (parts of) your procedural tree, that part is no longer procedural.

That said, you can still do procedural effects on top of that afterwards.

Let me take an example: I made a scatter on a plane… but I would like to move 2 or 3 points of my scatter, on which I instanced trees for example… so I would have to put the Freeze node after making my scatter?

Yes.

1 Like

Basically it is like generating with GN, applying the GN modifier, and then picking up the previously generated things again… in a new GN?

But this node does all that procedurally?

Hmm, it is not really editable geometry :confused:


Not exactly, because you cannot maintain the instances if you apply the modifier, but what you just said is what I explained in my previous post :slight_smile:

The geometry would be editable if you realize the instances. You could also split your tree and extract just the few instances you want to touch, keeping one part fully procedural and another part with the Freeze node.

This is not fully procedural; it is meant to implement a semi-procedural workflow, which is needed in many situations in the end. But you are not forced to use it, of course; the fully procedural nature of GN won’t be lost because of this. This is just an addition, one more workflow, and a very useful one :slight_smile:


Sorry, my question may seem stupid, but in the case of the image below, by adding the Freeze node could I, for example, move the point which is framed in red?

Because basically, once the dots are scattered, you don’t have the possibility to move them… they collide here! And I don’t want this :confused: I wanted to move these points manually to get an exact result without weight painting.

Basically, in my head, this node makes it possible to go from a state that cannot be manipulated manually to one that can be manipulated and adjusted manually, while returning to the procedural result as soon as we are no longer on the node!

To make it work correctly you also need a split geometry node: you separate the points you want to modify, then you use the Freeze and then the Join. If you don’t split it first, you will be duplicating the points.

Yes, the freeze node would allow you to edit the points manually (in edit mode).

Now, if you change the point density, no change would happen (since you froze the result). To have it work again you would need to unfreeze it, which means the changes you made in Edit Mode would be lost.
