I was wondering whether the little octahedrons from the Point Distribute node are placeholders, or if that's how we want them to look? I have some thoughts about them; I think small points might serve us better.
Here I have two cubes: the left one has just points and is viewed in edit mode, the right one has instanced octahedrons on it. As you can see, the octahedrons block the view of the back of certain meshes, so with just points you can read the shape better. It's also easier to judge the distance between neighboring points with the left version, whereas with the octahedrons it's a bit tougher to gauge, imo.
Here, even though both cubes are quite dense, with the left one you can trace which normal comes from which point with mild effort, whereas with the octahedrons it's quite difficult, I think. Something about a point also lends itself to being able to "point" in any direction; with the octahedrons it feels more like "Oh, that's where you're pointing?" since they have geometry. A small mental disconnect, I feel, but I think seeing normals will become more important later on as Geometry Nodes evolves.
These are particles from the standard Blender particle system. I think these have the right idea: a point with a small gradient on it for contrast when they overlap on screen. They are also white-ish, which might be better than the black-on-grey from the other pictures. A bit hard to say.
This is edit-mode points on the left, particles shrunk down in the middle, and octahedrons on the right. As you can probably see, I'm not sure what the ideal HSV for these should be, and in that light the octahedrons might seem like a nice choice, but if you look at the first picture again, I think it shows that on denser scattering the points win out, imo. The point version would need some tweaking, and probably some viewport settings as well, maybe in Overlays, to control size and perhaps color.
I wonder what people think of all this? Am I alone on this?
Another consideration for pulling attribute data per point to influence an instanced/arrayed object would be list control. Examples like what I tried with flowers would just take the value directly, and things like @Miro_Horvath's bird formation would want a random value per point, but some (a lot of) projects would want more measured control on an index-by-index basis.
If you were doing the popular Al Bahr tower kinetic sunshade example, you'd want to be able to control the tessellated tiles across a number of transforms based on the attribute's value at each point.
This is using Sverchok, with the distance from a point controlling the panels here.
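To make the kind of per-point control I mean a bit more concrete, here's a minimal Python sketch (the effector position, falloff distance, and point values are all made up for illustration, not the actual Sverchok graph):

```python
from mathutils import Vector

effector = Vector((0.0, 0.0, 0.0))   # hypothetical controller position
max_dist = 5.0                       # hypothetical falloff distance

def panel_factor(point):
    """Map a point's distance to the effector into a 0-1 factor."""
    dist = (point - effector).length
    return max(0.0, min(1.0, 1.0 - dist / max_dist))

# Each tessellated tile would then use its own factor to drive its transform,
# e.g. how far the sunshade panel opens at that point.
points = [Vector((1.0, 2.0, 0.0)), Vector((4.0, 0.0, 0.0)), Vector((6.0, 0.0, 0.0))]
factors = [panel_factor(p) for p in points]
```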
Also, for doing things like arrays / radial arrays and general procedural modelling, wouldn't it make sense to have some basic generators like line, line segment, circle, plane, cube? Being able to quickly generate a line or circle with a certain number of verts and then array other geometry and/or objects/collections onto it would be a massive time saver over transform + join + transform + join ad infinitum.
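For the circle case, the generator itself would be trivial; a hypothetical sketch of what such a node would compute (the function name and parameters are just illustrative):

```python
from math import cos, sin, tau

def circle_points(count=12, radius=1.0):
    """Return `count` evenly spaced points on a circle in the XY plane."""
    return [
        (radius * cos(tau * i / count), radius * sin(tau * i / count), 0.0)
        for i in range(count)
    ]

# Points that other geometry, objects, or collections could then be arrayed onto.
verts = circle_points(count=8, radius=2.0)
```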
Hah, thanks for that example. I tried this exact hex pattern thing yesterday but failed because I couldn't figure out how to scale the interior mesh of each hex relative to its center.
Just using Sverchok to generate an Ngon or line to hand over to GN here makes for easy arrays. Orienting the radial array is done by calculating the difference between each point and the object origin.
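In Python terms, that orientation boils down to something like this (a rough sketch of the same idea with mathutils, not the actual Sverchok graph):

```python
from mathutils import Vector

origin = Vector((0.0, 0.0, 0.0))     # object origin
point = Vector((2.0, 1.0, 0.0))      # a point on the generated Ngon/circle

# Direction from the origin to the point.
direction = (point - origin).normalized()

# Rotation that aims the instance's +Y axis along that direction, keeping +Z up.
rotation = direction.to_track_quat('Y', 'Z').to_euler()
```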
I agree that the current points display is not the best, because they have a fixed size in world space. Btw, points have a radius attribute, and if you set it to 0 then only the square vertex dots remain visible. Unfortunately, the wireframe overlay needs to be enabled to see those dots, and they are a bit too small by default and hard to see.
Maybe a shaded appearance for points could be useful in some cases - for example, they could inherit the surface normal and color from an object when generating a point-cloud representation of that object. But for simply scattering particles, simple flat dots with a fixed size on screen would work better. Maybe the Point Distribute node could have an option to choose the display style?
I heard on Blender Today from Pablo that since the current target is scattering, they were thinking about what the next target would be, and parametric modelling seemed to be a strong contender. Parametric primitives are pretty much the first step of that, so if they choose parametric modelling next, I assume we would see those soon-ish. Pablo explicitly brought up a scenario when teaching Blender where he had to explain that when you want to change the segments on a sphere or anything, you need to delete it, re-add it, and un-collapse the pop-over to do it. He didn't seem very happy having to explain it that way, so I think it's on the team's mind for sure.
It's a joy to follow the daily improvements on geometry nodes!
I've got a question: will it be possible to smooth the result of the Volume to Mesh node?
Do you plan to integrate a dedicated node for that? Is it something to expect in the coming weeks/months, or is it further down the todo…
While I'm at it, is there a way to do some randomization on the material of the instances?
Using Eevee I've tried different ways to do it, but all of them failed…
Worst case scenario, I can apply the geo-modifier and go about it as usual, but that kills a bit of the awesomeness.
Yes, I just ran into the same issue, and was forced to use an additional Remesh modifier after the nodes in the stack, with the smooth shading option turned on.
Also, it seems that you can't add a material to the resulting mesh after using the Volume to Mesh node. The workaround of adding another Remesh modifier does then allow a material to be added.
In your shader, the Object Info node's Random socket will output a random value per instance in both Eevee and Cycles, so you can use that to randomise colours within a shader, or use it to mix between several shaders as long as they exist within the same graph.
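In case it helps, a minimal Python sketch of that same setup (assuming a node-based material named "Material" with the default Principled BSDF already exists):

```python
import bpy

mat = bpy.data.materials["Material"]      # assumes this material already exists
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

obj_info = nodes.new("ShaderNodeObjectInfo")
ramp = nodes.new("ShaderNodeValToRGB")    # Color Ramp to map the 0-1 random value to a colour
bsdf = nodes["Principled BSDF"]           # assumes the default Principled BSDF node is present

links.new(obj_info.outputs["Random"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], bsdf.inputs["Base Color"])
```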
Thanks a lot @Erindale! I'd tried that, but it wasn't working because of a small issue: I had added an Attribute Randomize (position) to jitter the instanced meshes right before the Output, and in that case it doesn't work anymore. Thanks again for putting me back on track!
EDIT: This also happens as soon as you apply a modifier after the geo-nodes. Probably the instances get converted to one mesh and are no longer separate objects, which sounds logical indeed…
@jamez, thanks for the tips! I'll try all that!
Not yet, and I bet that, as one of the "weird" modifiers, it'll be down the line. I've been making a skin-based human anatomy file though, with long-term plans of geo nodes on it for, say, pulsing veins or bulging muscles.
In the meantime, here's some Python for accessing skin weights. Useful for throwing Skin modifiers on 50 separate vert-chain objects. Definitely something I'd rather be doing through a node interface.
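Roughly this kind of thing - a minimal sketch, assuming the objects are selected meshes and the 0.05 radius is just an example value:

```python
import bpy

# Throw a Skin modifier on every selected mesh and set a uniform skin radius.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    if not any(m.type == 'SKIN' for m in obj.modifiers):
        obj.modifiers.new(name="Skin", type='SKIN')
    if not obj.data.skin_vertices:
        continue  # skip if the skin vertex layer isn't there yet
    for skin_vert in obj.data.skin_vertices[0].data:
        skin_vert.radius = (0.05, 0.05)   # per-vertex (x, y) radius used by the Skin modifier
```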
I've used multiple Transform nodes to transform the whole scene to prepare for clipping, which I'm guessing collapses to a single matrix applied to each point. From a performance perspective it works well, even in camera fly mode where the recalculation is done every frame.
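My assumption about the collapse is essentially this (a mathutils sketch of the idea, not what the node tree literally does internally):

```python
from math import radians
from mathutils import Matrix, Vector

# Three chained transforms...
translate = Matrix.Translation((0.0, 0.0, 1.0))
rotate = Matrix.Rotation(radians(45.0), 4, 'Z')
scale = Matrix.Diagonal((2.0, 2.0, 2.0, 1.0))

# ...collapse into a single matrix, which is then applied once per point.
combined = translate @ rotate @ scale
point = Vector((1.0, 0.0, 0.0))
transformed = combined @ point
```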
It looks really cool, but again, I think this kind of thing would be better implemented as a feature at a much lower level. For the viewport/Eevee, when moving to Vulkan, using compute shaders would spare the per-frame CPU->GPU back-and-forth, and would be easy pickings for moving to mesh shaders and occlusion culling in the future. Cycles could just do it under the hood as well. The user should not have to concern themselves with setting up culling manually inside the node system; performance should be available by default.