Volume Object Feedback

Thanks for the info, once again not aware of the mechanics behind it all so it’s appreciated.

I’m bouncing off of Houdini, where I know you can convert a polygonal mesh to either a distance VDB or a “fog VDB”. With the fog VDB you can use a Volume VOP to have noises interact with the VDB, and I believe those are greyscale, so I wasn’t sure how that works.

Certainly not trying to be “houdini has that so why don’t we?”, I just thought that behavior would be great to have.

The “distance VDB” is what I meant: a signed distance field, or SDF.
In Houdini there is e.g. the VDB Analyze SOP that can output the gradient field I mentioned, giving you a field of 3 floats (or one vector) per voxel that acts like a normal pointing toward the surface.
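For readers unfamiliar with the gradient field idea, here is a tiny sketch in plain Python (not the OpenVDB or Houdini API) of why the gradient of an SDF behaves like a surface normal: a central-difference gradient of a signed distance function is a unit vector pointing away from the surface.

```python
import math

def sphere_sdf(x, y, z, r=1.0):
    """Signed distance to a sphere of radius r centred at the origin."""
    return math.sqrt(x * x + y * y + z * z) - r

def sdf_gradient(sdf, x, y, z, h=1e-4):
    """Central-difference gradient; for a true SDF this is a unit vector
    pointing away from the surface, i.e. a pseudo-normal."""
    gx = (sdf(x + h, y, z) - sdf(x - h, y, z)) / (2 * h)
    gy = (sdf(x, y + h, z) - sdf(x, y - h, z)) / (2 * h)
    gz = (sdf(x, y, z + h) - sdf(x, y, z - h)) / (2 * h)
    return (gx, gy, gz)

# At point (2, 0, 0), outside the unit sphere, the gradient points along +X.
g = sdf_gradient(sphere_sdf, 2.0, 0.0, 0.0)
```

A tool like the VDB Analyze SOP does essentially this per voxel, storing the result as a vector grid.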


Some more tests for fun and to understand how it works…

In order: Cloud, Magic, Voronoi, Musgrave



I have downloaded the latest build to be sure…

Not super familiar with the Simplify panel, but it seems that the Simplify “Volume Resolution” option doesn’t work when you are in Cycles preview mode (it works in Eevee).

Yes, it’s an OpenGL-viewport-only feature at the moment.

@jacqueslucke I’m wondering if we should make this simplify setting apply to file load and the mesh to volume modifier instead of viewport drawing. It would make modifier evaluation faster, and later it could also use VDB LOD/mipmaps for faster loading.

On file read we currently load the tree structure, and OpenVDB will load the voxel data on demand. If we scale, that kind of lazy loading would be lost. There is some value in keeping it, so we can draw the bounds or wireframe of a volume quickly without loading the full file.

Not sure how we would keep that when simplification is on. We could somehow use the original tree structure for that, or scale down a tree structure without voxels for such cases. But it seems messy.


Would there be a global simplify setting for loaded .vdb files, or how would that work? It can’t be a setting on the volume data-block, because then you can’t change it when the data-block is linked into another file.

How should the simplify setting in the Mesh to Volume modifier work? Is it a percentage slider, or simply separate voxel_amount_viewport and voxel_size_viewport settings? A percentage slider has the benefit that it can be the same setting independent of the selected resolution mode.
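To make the trade-off concrete, here is a hypothetical sketch (names are illustrative, not Blender’s actual API) of how a single percentage slider could map onto both resolution modes, which is why it can stay independent of the selected mode:

```python
def simplified_resolution(mode, value, simplify=1.0):
    """Map one simplify percentage (0..1] onto either resolution mode.
    'VOXEL_AMOUNT' counts voxels along the longest axis, so fewer is
    coarser; 'VOXEL_SIZE' is the edge length of a voxel, so bigger is
    coarser. Hypothetical helper, not Blender's API."""
    if mode == 'VOXEL_AMOUNT':
        return max(1, round(value * simplify))
    elif mode == 'VOXEL_SIZE':
        return value / simplify
    raise ValueError(mode)

# 50% simplify: 256 voxels becomes 128, and a 0.1 voxel size becomes 0.2.
amount = simplified_resolution('VOXEL_AMOUNT', 256, simplify=0.5)
size = simplified_resolution('VOXEL_SIZE', 0.1, simplify=0.5)
```

With separate viewport settings instead, each mode would need its own value and they could drift apart when the user switches modes.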

Btw, I also noticed that reducing the resolution of a volume the way I do it in create_grid_with_changed_resolution is relatively slow (converting to a dense volume is even slower, of course). The issue is that the running time is proportional to the number of voxels in the high-resolution input grid, not the low-resolution output grid. Maybe I’m missing something, but it feels like there should be a faster way to create a low-resolution sparse volume.
I mainly noticed that in the Volume to Mesh modifier when a custom resolution is used: creating the low-resolution volume is actually the most expensive part of the modifier.
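To illustrate why the cost scales with the input, here is a minimal model of that kind of downsampling in plain Python, using a dict as a stand-in for a sparse grid (this is an illustrative sketch, not the OpenVDB implementation): every active high-resolution voxel has to be visited and accumulated into the coarse cell it maps to, so the loop runs over the input size regardless of how small the output is.

```python
def downsample_naive(voxels, factor):
    """Average high-res voxels into low-res cells.
    `voxels` maps (i, j, k) -> float; a stand-in for a sparse grid.
    Runtime is O(number of active high-res voxels), which is the
    cost described above, not O(output voxels)."""
    sums, counts = {}, {}
    for (i, j, k), v in voxels.items():
        key = (i // factor, j // factor, k // factor)
        sums[key] = sums.get(key, 0.0) + v
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

# Eight voxels along X collapse into two coarse cells at factor 4.
high = {(i, 0, 0): 1.0 for i in range(8)}
low = downsample_naive(high, 4)
```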


The setting would remain in the same place in the UI and data structures. It would get applied globally when loading and creating OpenVDB volumes, rather than when creating the GPU texture.

The only really fast solution here is to create mipmaps and save them in the .vdb file, so we can load the minimal amount of data.

But it may already help to only scale down by powers of two, and use an optimized implementation for that. tools/MultiResGrid.h in OpenVDB is the code to create mipmaps; probably that can be used.
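Continuing the dict-based sparse-grid model from above, a power-of-two pyramid in the spirit of MultiResGrid could look like this (an illustrative sketch, not the OpenVDB code): each level is built by 2× averaging of the previous one, so level n is built from roughly N/8ⁿ voxels instead of re-reading the full-resolution grid for every target resolution.

```python
def build_mipmaps(voxels, levels):
    """Build a pyramid of progressively coarser sparse grids.
    `voxels` maps (i, j, k) -> float. Each level halves the
    resolution of the previous one via 2x box averaging."""
    pyramid = [voxels]
    for _ in range(levels):
        src = pyramid[-1]
        sums, counts = {}, {}
        for (i, j, k), v in src.items():
            key = (i // 2, j // 2, k // 2)
            sums[key] = sums.get(key, 0.0) + v
            counts[key] = counts.get(key, 0) + 1
        pyramid.append({key: sums[key] / counts[key] for key in sums})
    return pyramid

# A 4x4 slab of voxels shrinks to 2x2, then to a single coarse voxel.
pyramid = build_mipmaps({(i, j, 0): 1.0 for i in range(4) for j in range(4)}, 2)
```

If such a pyramid were saved in the .vdb file, a simplified load could read only the coarse level it needs.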


Mesh to Volume is great. Would be awesome if it could also take into consideration any particle systems present on the mesh being converted to the volume.


Simulated and exported from Houdini

The normals are flipped, I don’t know why.

Edit:

Two problems with Cycles:

  • Changing frames while shading in the viewport makes Blender crash
  • The mesh that is generated from the volume doesn’t show when rendering with F12

Viewport

Render

This also has flipped normals

With a Smooth modifier, smooth shading, and the default threshold

Please create a bug report on https://developer.blender.org. Also, it would help a lot if you could provide a simple example file that allows me to reproduce the issue.

You can add me as a subscriber to the report, so that I don’t miss it.


We have a new object type without manageable object data.

This is a fundamental issue with the Volume Object for me. I would expect the volume object to behave similarly to a grid/domain. One level down from a volume object should be a set of volumetric primitives (SDFs, Voxels, Levelset Samplers (a mesh sampler would do what the Mesh to Volume modifier does now)). These primitives should be “owned” by the volume object in the same way that the points, edges, and faces of a mesh are “owned” by a mesh object. SDFs and Levelset Samplers would need to be datablocks in their own right and would need a transform. The Voxel primitive would have a set of tools to directly “paint” voxels into the volume.

This kind of structure would make it possible to not only import externally produced OpenVDB files, but also create complex volumetric geometries from scratch in Blender. It would allow for the same kind of direct and fast manipulation of volumes that mesh modeling affords.


The distinction is already there. The volume datablock is the volumetric primitive, and the volume object is using that primitive. What’s missing is the geometry nodes system to take one or more geometry primitives as input and output one or more geometry primitives, without restrictions on types.

The volume datablock is intended to be used for fog, levelset and SDF volumes. OpenVDB does not restrict mixing grids of different types, so adding that restriction on our side would not be good for compatibility.

That is currently a pain to use.

Whatever node tree is made with geometry nodes, the user will have to adjust the locations of those primitives in the viewport.
And currently, to move a volume primitive, the user has to select and transform a mesh.
A mesh that has to be displayed as its bounding box to avoid occluding the volume.
We change the location by transforming the mesh, but we have to select the volume object to adjust density and resolution.
So, to adjust one volume primitive, the user has to switch between two objects:
one that is unrecognizable because it is shown as its bounding box, and another that is almost always overlapped by another object and is mostly only selectable through the outliner.

And to make a nice image, we have to vary density by using several volume primitives.
So, just to represent one thing as a cloud, we might end up dealing with six or eight objects.

Yes, and geometry nodes without restrictions on input and output types mean you do not need multiple overlapping objects.

No. Geometry nodes inputs will still be objects.
In the end, we could have one unique volume object,
but only if there are nodes that allow defining different densities and resolutions for those inputs in different areas of space.
Densities and resolutions will have to be handled at the node level, not at the modifier level.

With geometry nodes, the display/selectability problem of unrecognizable inputs in the 3D Viewport will remain.

I’m telling you they don’t have to be separate objects.

I’m not sure I follow what you mean. Let’s imagine a plausible future scenario here: if we drop in a node for Suzanne, then a Mesh to Volume node, we get a volume Suzanne. The mesh is replaced; it does not overlap. The node-based tree is a series of causalities, so if you wanted to go back to the mesh Suzanne you could just mute the volume node that comes after it. It would be different from how it is now with modifiers; that is one advantage of geometry nodes.

The way you are describing it, it seems like it will always require a bunch of separate objects to manipulate a volume. This feels clunky to me; in comparison, for a mesh it would be like having to create a separate “face” object and then add a modifier to the mesh object to link in that face datablock.

From a purely UX perspective, it would be really nice if the volumetric geometry primitives (SDF, levelset etc) were hierarchically inside of the Volume object. So you could go into Volume Object edit mode and directly manipulate them, then pop back out and have the volume exist as a single unit.


I’m telling you they don’t have to be separate objects.

But in this case, wouldn’t you have to modify node values to transform the shapes? This will certainly be useful for parametric modeling, but it doesn’t allow for a more immediate, artist-friendly workflow to manipulate volumes.