I’m not sure I follow what you mean. Let’s imagine a plausible future scenario: if we drop in a node for Suzanne, then a ‘Mesh to Volume’ node, we get a volume Suzanne. The mesh is replaced; the two do not overlap. A node-based tree is a series of causalities, so if you wanted to go back to the mesh Suzanne you could just mute the volume node that comes after it. That would be different from how it is now with modifiers, which is one advantage of geometry nodes.
The way you are describing it, it seems like it will always require a bunch of separate objects to manipulate a volume. This feels clunky to me; for a mesh, the equivalent would be having to create a separate “face” object and then add a modifier to the mesh object to link in that face datablock.
From a purely UX perspective, it would be really nice if the volumetric geometry primitives (SDF, level set, etc.) were hierarchically inside the Volume object. So you could go into Volume object edit mode and directly manipulate them, then pop back out and have the volume exist as a single unit.
I’m telling you they don’t have to be separate objects.
But in this case, wouldn’t you have to modify node values to transform the shapes? That will certainly be useful for parametric modeling, but it doesn’t allow for a more immediate, artist-friendly workflow for manipulating volumes.
Here, you are talking like there is no Volume Object.
If geometry nodes define the object type, the user will have to mute the node every time they need to tweak a mesh property like a shape key.
Maybe you can explain some specific use cases and why they would involve separate objects, because I’m not sure what you are referring to.
Note that a volume datablock contains a list of grids, each with their own transform and data type.
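To make the point above concrete, here is a minimal sketch (not the actual Blender API; all names are illustrative) of a volume datablock modeled as a list of grids, each carrying its own transform and data type, mirroring how OpenVDB stores named grids in a .vdb file:

```python
from dataclasses import dataclass, field

# Hypothetical model, not Blender's real data structures: a volume
# datablock holds a list of named grids, each with its own data type
# and (here simplified to a uniform voxel size) transform.

@dataclass
class Grid:
    name: str            # e.g. "density", "velocity"
    data_type: str       # e.g. "float", "vec3"
    voxel_size: float    # stands in for the grid's full transform

@dataclass
class VolumeDatablock:
    grids: list = field(default_factory=list)

vol = VolumeDatablock()
vol.grids.append(Grid("density", "float", 0.1))
vol.grids.append(Grid("velocity", "vec3", 0.2))

for g in vol.grids:
    print(g.name, g.data_type, g.voxel_size)
```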
Muting would not be necessary. If you go into edit mode you would be able to see both the input mesh and the resulting geometry (which may be a mesh or something else). It’s the same as with modifiers, which display both meshes.
You would, yes; this is common in node-based workflows like this. Since the last “active” node is what shows up in the viewport, if you wanted to tweak the shape you would have to go back to the mesh node. In this case there might have to be an “edit mesh” node between the Mesh Suzanne node and the Mesh to Volume node, or the mesh node could have editable properties. That said, if primitives had editable properties, you wouldn’t have to mute the volume node; you could simply click on the Mesh Suzanne node and get its properties in the “N” panel, or whatever the equivalent would be in geometry nodes.
Or this. I shouldn’t speak in probabilities; I am not a developer. I would trust Brecht on this matter regardless.
I am disconcerted. The whole thread is about Volume Object Feedback.
And now we are talking as if it will become just a modifier / geometry node handled by a mesh object.
If it is the case, I am fine with that.
Or if you are saying that Volume Object will have an edit mode, I am fine with that. That is what we are asking for.
But if it is not, I am missing pieces of information about the expected workflow, which is probably very far from what we have now.
Some relevant design docs are here:
We have multiple geometry datablock types: meshes, curves, metaball, volume, hair and pointcloud. When you assign such a geometry datablock to an object datablock, you get a “mesh object”, or “metaball object”, or “volume object”. But that’s not the fundamental concept; it’s just a name we give to the combination of two datablocks.
Most of the time, the object will output the associated geometry datablock as-is or with only minor modifications. But an object can also output arbitrary geometry and object instances through modifiers. That’s already there, with particles, hair, smoke and instances.
Certainly geometry nodes will have workflow implications, but it doesn’t have to be very far for the common cases. We already have modifiers that do similar things.
Here’s an example use case:
I am a 3D modeler and want to make a Volumetric Blender Guru Donut. I am familiar with mesh modeling in Blender. I want to create a volumetric donut so I can cut the donut in half and see all the internal structure of the dough. I am more of a visual artist than a technical artist; as such, I feel more comfortable moving things around in the viewport than tweaking values on a node.
I press shift-a to add a Volume Object to the scene. I select it and press tab to go into edit mode.
In edit mode, I press shift-a to add a volumetric primitive (equivalent to the same menu in mesh edit mode) and I see:
├ Mesh Sampler
├ Point Cloud Sampler
├ Curve Sampler
I pick a Torus from the Distance Functions, which then appears at the 3D cursor as a cloudy-looking shape.
It has a visible center point and can be moved, rotated, and scaled just as if I had added a Torus Mesh Object in object mode. It has its own properties in the properties panel, so I can adjust parameters of the torus such as its major and minor radii. It works somewhat similarly to the “active element” in metaball edit mode.
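For readers unfamiliar with distance functions: the torus primitive above can be expressed as a standard signed distance function (negative inside, positive outside). A minimal sketch, assuming a torus centered at the origin in the XY plane (the parameter names are illustrative, not from any Blender API):

```python
import math

# Standard torus SDF: distance from point (x, y, z) to a torus with
# ring radius `major` and tube radius `minor`. Negative values are
# inside the tube, positive values outside, zero on the surface.

def torus_sdf(x, y, z, major=1.0, minor=0.25):
    q = math.hypot(x, y) - major   # distance to the ring's center circle, in-plane
    return math.hypot(q, z) - minor

print(torus_sdf(1.25, 0.0, 0.0))   # on the tube surface -> 0.0
print(torus_sdf(1.0, 0.0, 0.0))    # tube center -> -0.25 (inside)
print(torus_sdf(2.0, 0.0, 0.0))    # well outside -> 0.75
```

Adjusting the “major and minor radii” in the properties panel would amount to changing these two parameters of the underlying function.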
I want my donut to have a solid crust but also lots of pockets of air on the inside, so I select the torus I just created and press shift-d to duplicate the distance function. I make the new torus have a slightly smaller minor radius so it fits within the first torus; this represents the inner part of my donut. I uncheck a box in the properties panel so that this shape does not directly contribute to the volume, since I’ll only be using it as a boolean target.
I need a way to cut the pockets of air out of the inner donut, so I once again press shift-a, and this time I select Voronoi from the Volume Texture menu. This adds a space-filling texture to the whole volume.
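A rough sketch of such a space-filling Voronoi (Worley) texture, for those curious how it works under the hood: each integer lattice cell gets one pseudo-random feature point, and the texture value at a point is the distance to the nearest feature point. Thresholding that distance carves bubble-like pockets. This is a generic illustration, not Blender’s actual implementation:

```python
import math, random

# Worley/Voronoi noise: one deterministic pseudo-random feature point
# per integer lattice cell; the value at p is the distance to the
# nearest feature point among the 27 neighboring cells.

def feature_point(ix, iy, iz, seed=0):
    rng = random.Random(hash((ix, iy, iz, seed)))  # deterministic per cell
    return (ix + rng.random(), iy + rng.random(), iz + rng.random())

def worley(x, y, z, seed=0):
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    best = float("inf")
    for dx in (-1, 0, 1):          # search the 27 surrounding cells
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                fx, fy, fz = feature_point(ix + dx, iy + dy, iz + dz, seed)
                best = min(best, math.dist((x, y, z), (fx, fy, fz)))
    return best

# The nearest feature point is never farther than the sample's own cell diagonal:
print(worley(0.5, 0.5, 0.5) <= math.sqrt(3))
```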
Now I have all the basic shapes I need. Next, I need to apply some boolean operations between my different volumetric shapes.
First, I select my volume texture; in its properties panel it has a Boolean list. I click to add a Boolean Intersection operation and then select the inner donut distance function from a dropdown list or using an eyedropper. I can now see the spongy interior of my donut.
I also need to remove the solid dough from the outer donut, so I select the outer donut and add a Boolean Subtract operation to its Boolean list, targeting the inner donut.
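On signed distance functions, these boolean operations reduce to simple min/max combinations, which is part of why they are cheap on volumes. A minimal sketch, illustrated with two concentric spheres standing in for the outer and inner donut shapes:

```python
# CSG on signed distance functions:
#   union        -> min(a, b)
#   intersection -> max(a, b)
#   subtraction  -> max(a, -b)   (a minus b, i.e. intersect with complement)

def sphere_sdf(x, y, z, r=1.0):
    return (x * x + y * y + z * z) ** 0.5 - r

def union(a, b): return min(a, b)
def intersect(a, b): return max(a, b)
def subtract(a, b): return max(a, -b)

# "Solid crust" analogue: outer shape minus inner shape leaves a shell.
def shell(x, y, z):
    return subtract(sphere_sdf(x, y, z, 1.0), sphere_sdf(x, y, z, 0.8))

print(shell(0.9, 0.0, 0.0))  # inside the shell -> -0.1 (negative = solid)
print(shell(0.0, 0.0, 0.0))  # hollowed-out center -> 0.8 (positive = empty)
```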
I now have what looks like a cloudy donut and if I use the slice visualization on the Volume Object properties I can see that it’s solid on the outside and full of tasty pockets of air on the inside.
I want to make it look like someone has taken a bite out of my donut. I could probably construct a bite shape using just volume primitives, but it would be much faster to just model it as a mesh.
I press tab to exit volume edit mode and press shift-a to add a mesh circle object. I press tab to enter mesh edit mode, move some verts around, and then extrude to make my bite shape.
I press tab to exit mesh edit mode and select my donut volume. I press tab to enter volume edit mode and press shift-a to add a Mesh Sampler. In its properties panel I select the bite mesh I just created. I now have a volumetric version of that mesh that I can move around independently. I also mark this shape as non-contributing to the final volume, since it will only be used as a boolean target.
I need to apply the bite to both the outer donut and the inner donut texture, so I go to the Volume Object properties where my list of volumetric shapes is and create a folder, putting both of those shapes into it. I can then select the folder/group, go to its properties (which are the same as any other volume shape’s), and add a boolean subtract operation targeting the bite volume I just created.
I now have a cloudy donut with pockets of air inside it and a bite taken out of it.
The last thing I need to do is make sure it will show up as an actual surface, so I add a Volume Mesher Modifier to the Volume Object. This changes its output from a volumetric cloud to a mesh.
I can now render it as I would any other solid object and it has an insane amount of internal detail that would be basically impossible to do with a normal mesh editing workflow.
If I want to make more donuts, it’s as easy as pressing tab to exit edit mode and then shift-d to duplicate the Volume Object. Since all of the shapes are in the frame of reference of the Volume Object, I can translate, rotate, and scale my donut copies just as I would a Mesh Object.
I am a little bit worried about the UI implications. How will datablock properties be displayed when an object has several datablock types involved in its geometry node tree?
It is true that we have particles and hair working like that.
That comes with a properties tab dedicated to particles and extra panels when using hair, or, for smoke simulation and instances, properties spread across several objects.
If we use a geometry node, for example, to convert a Bezier curve into a mesh, add a displacement modifier node to its surface, and then convert it into a volume, would it be possible to display the properties of all three datablock types in the UI at the same time? Will the user have to constantly select/deselect or mute/unmute nodes, or will it be easy to customize the display of properties and create presets?
Ronan, presets can be created with node groups. That way the user can focus on the relevant information at different hierarchy levels; see “High Level Abstraction” in the particles workshop write-up. Hand-picked settings can also be linked to the Modifier (Node Group) input to be exposed in the modifier stack UI.
OK. So if we need one node socket for each tiny boolean option, name field, and value per property in the Object Data tab in order to create custom node groups, development of those geometry nodes will take years (without even considering how edit mode will know which datablock type to display in the viewport).
In the meantime, is it possible to add the same object/collection switch that exists for the Boolean modifier to the Mesh to Volume and Volume to Mesh modifiers?
Thanks for the explanation. This use case is outside the scope of the volume object project. We’re not trying to build a new interactive parametric modeling / construction history type system; rather, we’re targeting use cases like VFX and motion graphics.
If we were to build such an interactive parametric modeling system, I imagine it would not be specific to volume objects, but rather involve arbitrary geometry types and be based on geometry nodes. But personally I would not go in this direction at all. I think it’s better for Blender to remain focused on improving sculpt mode for more artistic workflow and geometry nodes for more technical workflow, rather than introducing another paradigm and spreading ourselves too thin.
This is going too far off topic now. I think it was useful to explain how current usability issues with volumes will be addressed by geometry nodes, but specific UI details of how geometry nodes work should be discussed elsewhere.
I agree, but my problem comes from a currently valid observation.
It is hard to deal with several volume primitives at the moment.
And several overlapping volume primitives are needed to obtain a nice EEVEE render.
A dense volume object looks ugly in EEVEE.
We need several layers of decreasing density from center to periphery to obtain nice fog and clouds.
And for performance reasons, it is better to have an opposite gradient of resolution.
Procedural textures are not able to use information about the mesh surface to produce such gradients.
So I will use several meshes as layers to do that.
But if such gradients could be added as modifiers on the Volume Object, that would be welcome.
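The kind of gradient described above can be sketched as a per-sample density falloff driven by signed distance to the surface (negative inside). This is only an illustration of what such a hypothetical modifier might compute per voxel; all names and parameters are made up for the example:

```python
import math

# Density that falls off from the center of a shape toward its
# periphery, derived from a signed distance field (negative inside).
# `falloff` controls how deep inside the surface full density is reached.

def sphere_sdf(x, y, z, r=1.0):
    return math.sqrt(x * x + y * y + z * z) - r

def density(x, y, z, max_density=1.0, falloff=1.0):
    d = sphere_sdf(x, y, z)
    if d >= 0.0:
        return 0.0                    # outside the surface: empty
    t = min(-d / falloff, 1.0)        # -d is the depth below the surface
    return max_density * t            # ramp linearly up to max_density

for x in (0.0, 0.5, 0.9, 1.1):
    print(x, round(density(x, 0.0, 0.0), 3))
```

Swapping the linear ramp for a smoothstep, or inverting it, would give the opposite resolution gradient mentioned above.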
OK, useful to know. I’ll focus my efforts on helping with geometry nodes in that case.
Volume to Mesh modifier
(sorry, I didn’t find a better thread for this)
I wanted to use Blender’s volumetric shader as a source, but it seems that only imported VDB files can be meshed by this modifier.
Is this planned for the future (or did I miss something)?
Thanks for any more info
Is it normal that negative scaling (for example on -X) of volume objects causes bright white artifacts?