Geometry Nodes

I was wondering if the little octahedrons from the Point Distribute node are placeholders, or if that's how we want them to look? I have some thoughts about them; I think small points might serve us better.


Here I have two cubes: the left one has just points and is being viewed in edit mode, the right one has instanced octahedrons on it. The octahedrons block the view of the back of certain meshes, as you can see, so with just points you can read the shape better. It is also easier to see the distance between neighboring points in the left version, whereas with the octahedrons it's a bit tougher to gauge, imo.


Here, even though both cubes are quite dense, with the left one you can trace which normal comes from which point with mild effort, whereas with the octahedrons it's quite difficult, I think. Something about a point also lends itself to being able to "point" in any direction; with the octahedrons it feels more like "Oh, that's where you are pointing?" since they have geometry. A small mental disconnect, but I think seeing normals will become more important later on as Geometry Nodes evolves.


These are particles from the standard Blender particle system. I think these have the right idea: a point with a small gradient on it for contrast where they overlap on screen. They are also white-ish, which might be better than the black-on-grey from the other pictures. A bit hard to say.


This is edit-mode points on the left, particles shrunk down in the middle, and octahedrons on the right. As you can probably see, I'm not sure what the ideal HSV for these should be, and in that light the octahedrons might seem like a nice choice, but I think the first picture above illustrates that with denser scattering the points win out, imo. The point version would need some tweaking, and probably some viewport settings, maybe in Overlays, to control size and maybe color.

I wonder what people think of all this? Am I alone on this?

3 Likes

Another consideration for pulling attribute data per point to influence an instanced/arrayed object would be list control. Examples like what I tried with flowers would just take the value directly, and things like @Miro_Horvath's bird formation would want a random value per point, but some (a lot of) projects would want more measured control on an index-by-index basis.

If you were doing the popular Al Bahr tower kinetic sunshade example, you'd want to be able to control tessellated tiles across a number of transforms based on the attribute's value at each point.

This is using Sverchok, with distance from a point controlling the panels here.
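For illustration, the per-point mapping involved here can be sketched in plain Python (the function name and falloff value are hypothetical; in a real setup this factor would be driven by an attribute at each point):

```python
import math

def panel_openness(panel_pos, driver_pos, falloff=4.0):
    # Map distance from a driver point to a 0..1 factor:
    # panels at the driver are fully closed, and they open
    # linearly until the falloff distance is reached.
    d = math.dist(panel_pos, driver_pos)
    return min(d / falloff, 1.0)

# A row of panels along X, driver point at the origin:
factors = [panel_openness((x, 0.0, 0.0), (0.0, 0.0, 0.0)) for x in range(6)]
```

Each panel's transform (scale, rotation of the tessellated tile) would then be blended by its factor.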

Also, for doing things like arrays/radial arrays and general procedural modelling, wouldn't it make sense to have some basic generators like line, line segment, circle, plane, and cube? Being able to quickly generate a line or circle with a certain number of verts and then array other geo and/or objects/collections on it would be a massive time saver over transform+join+transform+join ad infinitum.
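As a rough sketch of what such a generator boils down to, here is the vertex math a parametric circle primitive would perform (pure Python, names are mine, not a Blender API):

```python
import math

def circle_verts(n, radius=1.0):
    # n evenly spaced vertices on a circle in the XY plane,
    # the kind of output a parametric Circle node could give.
    step = 2.0 * math.pi / n
    return [(radius * math.cos(i * step),
             radius * math.sin(i * step),
             0.0) for i in range(n)]

verts = circle_verts(8, radius=2.0)
```

Instances could then be placed on `verts` directly instead of the manual transform+join loop.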

10 Likes

Indeed this would be very welcome feature :wink:

Hah, thanks for that example, I tried this exact hex pattern thing yesterday but failed because I could not figure out how to scale the interior mesh of each hex relative to its center.

I tried to access shape keys like a regular attribute, but it didn't work.
They are not accessible for the moment, right?

That would open a whole lot of new possibilities!! :slight_smile:

1 Like

Just using Sverchok to generate an n-gon or line to hand over to GN here makes for easy arrays. Orienting the radial array is done by calculating the difference between each point and the object origin.
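The orientation step mentioned here, deriving a rotation from the difference between a point and the origin, reduces to an atan2 on the direction vector; a minimal sketch (function name is mine):

```python
import math

def outward_z_angle(point, origin=(0.0, 0.0, 0.0)):
    # Z rotation (radians) that aims an instance outward,
    # computed from the direction vector point - origin.
    dx = point[0] - origin[0]
    dy = point[1] - origin[1]
    return math.atan2(dy, dx)
```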

I agree that the current point display is not the best, because points have a fixed size in world space. Btw, points have a radius attribute, and if you set it to 0 then only the square vertex dots remain visible. Unfortunately, the wireframe overlay needs to be enabled to see those dots, and they are a bit too small by default and hard to see.
Maybe a shaded appearance for points could be useful in some cases: for example, they could inherit the surface normal and color from an object when generating a point-cloud representation of that object. But for simply scattering particles, simple flat dots with a fixed size on screen would work better. Maybe the Point Distribute node could have an option to choose the display style?

I heard from Pablo on Blender Today that, since the current target is scattering, they were thinking about what the next target would be, and parametric modelling seemed to be a strong contender. Parametric primitives are pretty much the first step of that, so if they choose parametric modelling next, I assume we would see those soon-ish. Pablo explicitly brought up a scenario when teaching Blender where he had to explain that when you want to change the segments on a sphere or anything, you need to delete it, re-add it, and un-collapse the pop-over to do it. He didn't seem very happy having to explain it that way, so I think it's on the team's mind for sure.

5 Likes


Speak of the devil?

4 Likes

It's a joy to follow the daily improvements on Geometry Nodes!
I've got a question: will it be possible to smooth the result of the Volume to Mesh node?
Do you plan to integrate a dedicated node for that? Is it something to expect in the coming weeks/months, or later on the todo…

1 Like

While I'm at it, is there a way to do some randomization on the material of the instances?
Using Eevee I've tried different ways to do it, but all of them failed…
Worst-case scenario, I can apply the geo modifier and go on as usual, but that kills a bit of the awesomeness.

1 Like

Yes, I just ran into the same issue, and was forced to use an additional Remesh modifier after the nodes in the stack, with the smooth shading option turned on.

Also, it seems that you can't add a material to the resulting mesh after using the Volume to Mesh node. The workaround of adding another Remesh modifier does then allow a material to be added.

1 Like

In your shader, the Object Info node's Random socket will output a random value per instance in both Eevee and Cycles, so you can use that to randomise colours within a shader, or to mix between several shaders as long as they exist within the same graph.
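A common way to use that Random value is to feed it into a ColorRamp set to Constant interpolation, which effectively bins the 0..1 value into one of several discrete choices. A sketch of that binning logic in plain Python (just the idea, not the Blender API):

```python
def pick_variant(random_value, variants):
    # Bin a 0..1 random value into len(variants) equal intervals,
    # like a Constant ColorRamp with evenly spaced stops.
    idx = min(int(random_value * len(variants)), len(variants) - 1)
    return variants[idx]

colors = ["red", "green", "blue"]
```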

1 Like

Thanks a lot @Erindale! I'd tried that, but it wasn't working because of a small issue: I had added an Attribute Randomize (position) to jitter the instanced meshes right before the output, and in that case it doesn't work anymore. Thanks again for putting me back on track!

EDIT: This also happens as soon as you apply a modifier after the geo nodes. Probably the instances are converted to one mesh and are no longer separate objects; sounds logical indeed…

@jamez, thanks for the tips! I'll try all that!

1 Like

Where was this posted?

1 Like

Do you know if the skin modifier radius is an attribute I can access?

Not yet, and I bet that, as one of the "weird" modifiers, it'll be down the line. I've been making a skin-based human anatomy file though, with long-term plans for geo nodes on it for, say, pulsing veins or bulging muscles.

In the meantime, here's some Python for setting skin radii. Useful for throwing Skin modifiers on 50 separate vert-chain objects. Definitely something I'd rather be doing through a node interface.

import bpy

objs = bpy.context.selected_objects
radius = 0.003

for obj in objs:
    bpy.context.view_layer.objects.active = obj
    obj.modifiers.new(name='Skin', type='SKIN')

    # Each skin vertex stores a (radius_x, radius_y) pair.
    for vert in obj.data.skin_vertices[0].data:
        vert.radius = radius, radius
1 Like

@BD3D I've also built a node group for camera field-of-view clipping.

I've used multiple Transform nodes to transform the whole scene to prepare for clipping, which I'm guessing collapses to a single matrix applied to each point. From a performance perspective it works well, even for camera fly mode, where recalculation happens every frame.
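To illustrate why a stack of Transform nodes can be cheap: affine transforms compose, so a whole chain can collapse into a single transform applied once per point. A 1D sketch (each transform is a scale s plus a translation t; purely illustrative, not Blender's internals):

```python
def compose(*transforms):
    # Fold a chain of (scale, translation) pairs into one pair,
    # applied left to right; each step maps x -> s*x + t.
    s_total, t_total = 1.0, 0.0
    for s, t in transforms:
        s_total, t_total = s * s_total, s * t_total + t
    return s_total, t_total

def apply(x, s, t):
    return s * x + t

# Chaining two transforms equals applying the composed one once:
s, t = compose((2.0, 1.0), (3.0, 0.5))
```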

You can download it for free, and I'm happy for other people to build on this work.
Download link
YouTube node group details

11 Likes

It looks really cool, but again, I think this kind of thing would be better implemented as a feature at a much lower level. For the viewport/Eevee, when moving to Vulkan, using compute shaders would spare the per-frame CPU-to-GPU round trips and be easy pickings for moving to mesh shaders and occlusion culling in the future. Cycles could just do it under the hood as well. The user should not have to concern themselves with setting up culling manually inside the node system; performance should be available by default.

2 Likes