Geometry Nodes

Ahhh indeed, with attributes decoupled from the geometry, that’s not as obvious as I thought. I see. Thanks!

The current Point Instance node creates instances, but the geometry is automatically turned into ‘real’ geometry when needed. So I’d guess it wouldn’t be too hard to make the Point Instance node accept plain mesh data. However, that would be a lot less efficient, so it would be nice to have a way to communicate the difference between ‘instancing’ and ‘copying’ the input meshes to the user. That would be useful anyway, because the automatic conversion can be rather confusing for the user as it currently stands.

Please note I’m not a GN developer, just screaming from the sidelines :wink:


I totally agree with every single point of @Syscrusher. I’m not a dev, but I wonder if a primitive mesh created with a primitive node inside the tree could be converted into an object (like packed or compressed) behind the scenes before being instanced, avoiding the performance loss that would occur by copying the plain geometry.

This would actually apply to any piece of geometry at any point in the tree. Maybe a “pack geometry” node could be useful for that. It could have different modes to specify where to place the pivot of the packed geometry object: at the centroid, on the bounding box, or manually placed.

The generated packed object shouldn’t exist in the scene, of course, and would be contained only in the node tree. Maybe it could be output into the scene if needed by the user, with a “spawn object” or “output object” node…


Hello there,

With the arrival of the Raycast node, I decided to make a view frustum group that hides instances that are outside the view. As the Raycast node is only in the latest master, the node groups require Blender 3.0 alpha.

When doing the groups, I noticed two things:

  • The attribute workflow is hard to manage properly: in a node group, I want to use multiple attributes, and I want to delete them before the group output. This results in a lot of wires all around the nodes, removing the attributes with Attribute Remove. Maybe finding a way to output only some attributes could help.
  • It would be really useful to have an Active Camera node that outputs the FOV and the resolution, in addition to the loc/rot/scale of the object. This could help a lot with the integration of geometry nodes with rendering.

I put the two groups I made in a file, along with an example file, for you to look at:

If you like it, consider rating it. I’m sharing it because it could be useful to someone.

It’s not that an object is a packed version of mesh data. An object consists of two parts: the object and the object-data. The second part is the actual mesh data, while the first part contains the transforms for the whole object (simplified explanation warning). When you instance an object, you only copy the object part, but you reference (link) the object-data part. This makes it very efficient, because you don’t need to copy the (large) mesh data.
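To make the object / object-data split concrete, here is a rough Python sketch of the idea. The class names are purely illustrative, not Blender’s actual API; the point is only that each instance carries its own transform while all instances reference one shared block of mesh data.

```python
# Illustrative sketch (NOT Blender's actual API): an "object" pairs a
# transform with a reference to shared mesh data.
class MeshData:
    def __init__(self, vertices):
        self.vertices = vertices  # the potentially large part

class SceneObject:
    def __init__(self, data, location=(0.0, 0.0, 0.0)):
        self.data = data          # a reference, not a copy
        self.location = location  # the cheap per-instance part

# One heavy mesh, many lightweight instances that share it.
mesh = MeshData(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)])
instances = [SceneObject(mesh, location=(float(i), 0.0, 0.0))
             for i in range(1000)]

# Every instance references the exact same mesh data block:
assert all(inst.data is mesh for inst in instances)
```

Copying the geometry instead would mean duplicating `mesh.vertices` a thousand times, which is exactly the cost instancing avoids.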

A mesh primitive in GN just generates raw mesh data. Turning that into an object (i.e. consisting of an object and an object-data part) doesn’t really fit into the blender architecture I’d think. That would lead to ‘hidden’ objects which exist but are not shown in the outliner.

I’m not saying it’s impossible to generate objects from a GN tree, but it would need to be very thoroughly thought through and designed to prevent an unmaintainable mess.

So I don’t really see that happening anytime soon. If you need the efficiency of instancing, you’ll have to do it explicitly and keep the instanced object outside of your GN tree.

However just duplicating existing mesh data imho is something which would fit GN very well.


I see your point, and I’m aware of the difference between the object and the object-data. By “packing” I meant exactly that: creating “hidden objects”, not visible in the outliner, with an object container and data to be instanced. Of course this process would happen behind the scenes, transparently to the user. Anyway, I’m totally ignorant of Blender’s internal software architecture, so I don’t know if it even makes sense to talk about how something like that would be implemented internally.

It would still be great if it were possible to have a workflow that doesn’t necessarily rely on external objects, like the one described by @dfelinto in this post (on-the-fly creation, as opposed to the asset workflow):

It would be even better if instancing objects created directly in the tree didn’t incur a performance loss from copying raw mesh data instead of instancing it.

In Houdini you can just create object primitives, edit them with nodes, and then copy instances to points; it would be handy to have the same workflow possible here. To be honest, I don’t know whether Houdini instances the geometry or just copies it, losing performance as well.


We’ve mentioned this before in a few places. The proper solution is instancing geometry directly, not requiring an object for instances created in geometry nodes. Work on that is actually planned for this week.


Work on that is actually planned for this week.

Why would we still develop new nodes if there will be a rework of the whole data flow structure in X weeks? Is there somewhere we can read about the planning of geometry nodes for 3.0?


I think it’s completely impossible to get the alpha channel information from a color attribute.
Let me know if I’m wrong.
Right now this information is visible but not accessible, which is quite weird.

As weight/vertex paint is very slow on higher-poly meshes, is there any more responsive method for defining the distribution?
Or can we expect the performance of those two features to be improved significantly anytime soon?
Are there any plans in terms of general performance, so we can work smoothly even in projects with hundreds of thousands (or millions) of instances and “emitters” of e.g. 5M tris?

Might the Attribute Transfer node be useful?

Trying something …

Weight painting is done in “Group” on a lower-density grid. The Attribute Transfer node projects it onto a vertex group inside the higher-density grid.

That way editing is done on lower resolution meshes.
But I guess one would need to use many more small meshes and patch their weights onto the higher-density mesh with Attribute Transfer.
I am not sure how this would impact performance.

– Edit –
I tried with high density mesh.
If the lo-res group is projected onto the hi-res mesh, painting on the lo-res mesh becomes slow. But if the projection is disabled during weight painting, painting works smoothly.

I’ve been struggling recently to get colors applied to instances in geometry nodes. I’m trying to apply them using Attribute Fill, which works for a mesh but not for instanced geometry. Does anyone have any ideas as to why this might be happening? Each of the geometry instances has a vertex color Col channel, and the Col attribute appears in the Spreadsheet. I’m assuming this just hasn’t been implemented yet.

I’m using Blender 2.93.1 on Win10.

I’ve been having the same problem with Vertex Colors as attributes. This has become really problematic for me because I can’t export any changes to UVs or Vertex Colors with the current setup.

Attributes on instances don’t work (yet?). Entagma recently did a video showing how you can transfer point attributes to instanced geometry, but you need to collapse the instances into a single mesh, so it’s not usable in most cases.


Thanks for the tip, but this method wouldn’t allow any precise work, as you would be limited by the detail of the proxy (low-poly) mesh. Also, when editing the distribution you need to see immediately (responsively) what you are doing, but even in this case you would still experience the lag after each stroke; working like that for 8 hours or so will drive one crazy.
Yes, Blender’s performance issues are not a problem of geometry nodes, but if they make features like this one barely usable for any larger project, I think it is fair to mention it here, hoping the devs will notice and finally fix it (e.g. sculpting is infinitely faster than vertex/weight painting, so it must be doable?).
I have already spammed various Blender forums in the past regarding this, but maybe thanks to geometry nodes somebody will realize that this area of Blender needs some care; otherwise tools like this can’t really show their full potential.

I am trying to replicate this code snippet from an unnamed scripting language in GN:

vector nml = abs(@N);                        // absolute face normal
float  m   = max(nml.x, max(nml.y, nml.z));  // dominant component

if      (nml.x == m) { @uv = set(@P.y, @P.z, 0); }  // X-facing: project YZ
else if (nml.y == m) { @uv = set(@P.x, @P.z, 0); }  // Y-facing: project XZ
else                 { @uv = set(@P.x, @P.y, 0); }  // Z-facing: project XY

@uv /= 400;  // scale the UVs down

Just even thinking about this formula with the current state of GN makes my head hurt. It should be a simple formula to generate box mapping: a few if conditions testing which of the three axes the face normal is most aligned with, then setting the UVs based on world-space vertex positions.

It was trivial to put together, but I am having a hard time finding a reasonable approach to construct it in GN and actually output UVs that work with the Displace modifier.

Any ideas?
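For reference, the rule itself is easy to state outside of a node tree. Here is a plain-Python sketch (the function name is mine) of the box-mapping logic from the snippet above: pick the dominant axis of the normal and project the other two position components as UV.

```python
# Plain-Python sketch of the box-mapping rule: per point, find the axis
# the normal is most aligned with, and use the other two world-space
# position components as the UV coordinate.
def box_map_uv(position, normal, scale=400.0):
    px, py, pz = position
    nx, ny, nz = (abs(c) for c in normal)
    if nx >= ny and nx >= nz:   # normal mostly along X: project YZ plane
        u, v = py, pz
    elif ny >= nz:              # mostly along Y: project XZ plane
        u, v = px, pz
    else:                       # mostly along Z: project XY plane
        u, v = px, py
    return (u / scale, v / scale)

# A +X-facing point uses its Y/Z coordinates:
print(box_map_uv((10.0, 200.0, 400.0), (1.0, 0.0, 0.0)))  # (0.5, 1.0)
```

In GN terms, each branch corresponds to one candidate projection, and the if/elif chain is the part that has to become compare-and-mix nodes.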


Hello all.
Is there any work being done on adding shape keys as attributes? If yes, can anyone point me to it please? If not, is it possible to expose an attribute, say position for example, do some operations on it (using another similar geometry’s attribute), and feed it back to the original geometry?
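If position does become readable and writable like that, the operation described here reduces to blending two position arrays, which is exactly what a shape key evaluates to at a given value. A minimal plain-Python sketch of the idea (the function name is mine; plain tuples stand in for Blender attribute data):

```python
# Hypothetical sketch of the described workaround: take the "position"
# attribute of a base mesh and of a similar target mesh, blend them,
# and feed the result back. A linear blend is what a shape key does.
def blend_positions(base, target, factor):
    # For each vertex: base + factor * (target - base)
    return [
        tuple(b + factor * (t - b) for b, t in zip(p_base, p_target))
        for p_base, p_target in zip(base, target)
    ]

base   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0)]
print(blend_positions(base, target, 0.5))
# [(0.0, 0.0, 0.5), (1.0, 0.0, 1.0)]
```

The two meshes must be “similar” in the shape-key sense: same vertex count and matching vertex order, or the per-vertex blend pairs up the wrong points.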

Thank you! That’s super helpful! Glad to see 3.0 will at least have a workaround.

Not sure I nailed this but this is how I’d approach it:

Writing shader code used to demand a lot more replacement of conditional statements with things like mix or lerp. I couldn’t figure out how to get branching logic out of the box using geometry nodes, so I ended up approaching it as if I were writing a shader for a low-spec device.


Gif of box mapping in action: