As far as I remember, they settled on the “twice as tall” socket shown here:
Big fan of this design!
I’m wondering, though, if there are circumstances for the Join Geometry node where the order of the inputs matters?
Like the origin, for example … how do we see which input was “first” when there are multiple inputs in one socket? A list in the N-panel to reorder them, perhaps?
Oh yeah, that looks good. I think it only applies if there is no order at all, although… your idea of doing the reordering in the sidebar sounds good. In fact, I’d be all for having an alternate compact view with most node settings hidden and accessible from the sidebar, as has already been suggested. Where is that mockup from?
The mockup was posted by Pablo Vazquez in the geometry-nodes-squad channel on blender.chat.
With origin I mean the object origin of the object itself, the orange dot in the middle of the default cube. If you join two cubes in the 3D Viewport (not with geometry nodes), the newly created joined object will have the origin of the object you selected last (the active object). I’m wondering if there are more situations where the order of joining is important … maybe materials? Or maybe if they have attributes of the same name (like a vertex group, for example), is one overriding the other? If yes, then the order of joining matters and we need a way to see that order; otherwise it’s guesswork on those multi-input sockets.
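For anyone who wants to check that viewport behaviour, here’s a minimal bpy sketch (the object names “CubeA” and “CubeB” are just placeholders): the joined result ends up in the active object and keeps its origin.

```python
import bpy

# Placeholder names: two existing cube objects in the scene.
cube_a = bpy.data.objects["CubeA"]
cube_b = bpy.data.objects["CubeB"]

# Select both, but make CubeB the active object.
bpy.ops.object.select_all(action='DESELECT')
cube_a.select_set(True)
cube_b.select_set(True)
bpy.context.view_layer.objects.active = cube_b

# After joining, the result lives in CubeB and keeps CubeB's origin.
bpy.ops.object.join()
print(bpy.context.view_layer.objects.active.location)  # CubeB's origin
```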
I just tested the Geometry Nodes branch, and I came back to my main concern about the whole nodes project: performance.
So far I’m a bit worried: if we have to scatter pebbles over a big area, it would not be unusual to have more than a million pebbles scattered. However, performance when scattering a million spheres is awful, less than a frame per second I’m afraid.
Good idea, I’ll test it, but I think that doesn’t matter much. This is a new system being developed from scratch, so old performance issues should not be a problem here; performance has to be part of its evolution.
EDIT: Performance is more or less as bad, I’m afraid.
Performance is clearly being talked about in the chat: https://blender.chat/channel/geometry-nodes-squad
Caching and different distribution methods are being discussed, as well as making the noise pattern tileable, which should apparently also help with performance.
I’m more worried about execution performance, meaning the amount of geometry that we can easily handle in the viewport. One million faces should be easy; nowadays that’s a basic object with some detail, and right now if you go into Edit Mode on a plane with one million faces it’s absurdly slow. I imagine that also affects Geometry Nodes.
That was planned for 3.0, I think, but it may see some delays. I’m not saying that for any specific reason, just that it’s a big change and it may hit some bumps in the road.
I hope this gets improved before Vulkan. Performance is pretty important, and not being able to work on dense objects is also a pretty important issue, especially if we want to do procedural work on them, because procedural work with nodes is precisely about working at a big scale, not just on a single object or in a small area.
Is the concern about performance based on the amount of point calculations, or on the geometry it creates? I was just thinking it would be nice to have a node that can decimate geometry to a bounding box for the viewport and keep full geometry for rendering, like the lodify.py script does.
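Something like that can already be approximated per object in Python: flip the viewport display to a bounding box while the render engine still gets the full mesh. A minimal sketch, with “HeavyObject” as a placeholder name for whatever dense mesh you want to lighten:

```python
import bpy

# Placeholder name: any dense mesh object in the scene.
obj = bpy.data.objects["HeavyObject"]

# Draw the object as its bounding box in the 3D Viewport...
obj.display_type = 'BOUNDS'
obj.display_bounds_type = 'BOX'

# ...while the render engine still sees the real mesh, because
# display_type only affects viewport drawing, not rendering.
```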
Whether Blender uses Vulkan for rendering doesn’t really have anything to do with the geometry nodes performance. At best it’s a tangential relationship.
The performance being discussed is that of the “Point Distribute” node. Algorithms that distribute points in nice patterns, like the “Poisson Disk” scattering currently being pursued, just tend to be performance-heavy.
The same is true for the software with the big H that mustn’t be mentioned in this forum.
If you scatter 1,000,000 points on a grid in a “dumb” way, it’s rather fast but results in ugly patterns with a lot of overlapping point positions: in some places there are clumps of points, in others there are holes without any points at all.
If you then activate the “relax” option, the whole process gets slower… a lot slower. This has nothing to do with the graphics card or the viewport display but with the underlying algorithms used to push points apart. That’s also the reason why particle-based fluid simulations become slow so quickly: not because the GPU can’t display them any faster but because there’s a lot going on under the hood.
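To illustrate the cost difference (a toy sketch in plain Python, not the actual node code): naive scattering is one random draw per point, while a Poisson-disk-style “dart throwing” approach has to check every candidate against the points already placed, which without a spatial acceleration structure is roughly O(n²).

```python
import random, math

def naive_scatter(n, size):
    # "Dumb" scatter: one uniform draw per point. O(n), but points
    # can clump or overlap because nothing checks the neighbours.
    return [(random.uniform(0, size), random.uniform(0, size))
            for _ in range(n)]

def dart_throw_scatter(n, size, min_dist, max_tries=100000):
    # Toy Poisson-disk-style scatter: reject any candidate closer than
    # min_dist to an existing point. Without a spatial grid, this check
    # is O(n) per candidate, so the whole loop is roughly O(n^2).
    points = []
    tries = 0
    while len(points) < n and tries < max_tries:
        tries += 1
        candidate = (random.uniform(0, size), random.uniform(0, size))
        if all(math.dist(candidate, p) >= min_dist for p in points):
            points.append(candidate)
    return points

# The naive version finishes almost instantly; the rejecting version
# slows down dramatically as the domain fills up.
print(len(naive_scatter(10000, 100.0)))
print(len(dart_throw_scatter(1000, 100.0, 1.0)))
```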
EDIT: OK, sorry. I just gave the blend file from JuanGea a try and now I know what you mean. It’s the pure viewport performance you’re talking about. And it’s, well, quite sluggish.
Just for the record:
If I take the “1million_test.blend” file and rebuild the same setup with the former “Dupliverts” workflow, parenting the icosphere to the grid containing those 1 million+ verts and setting “Instancing” to “Vertices”, I get exactly the same viewport performance (in my case 2 fps with a 1080 Ti) as with the Geometry Nodes branch and the original blend file.
So the culprit is not the new Geometry Nodes but the viewport itself.
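For anyone who wants to reproduce the comparison, the old Dupliverts setup boils down to a couple of lines of Python (object names are placeholders):

```python
import bpy

# Placeholder names: a dense grid with ~1 million vertices and an
# icosphere to instance on each of its vertices.
grid = bpy.data.objects["Grid"]
sphere = bpy.data.objects["Icosphere"]

# Parent the icosphere to the grid...
sphere.parent = grid

# ...and enable the classic "Dupliverts": instance the children
# on every vertex of the parent mesh.
grid.instance_type = 'VERTS'
```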