Geometry Nodes

No, that does not make sense. You are suggesting new functionality, because the current auto smooth does not work that way. By making it work that way, you are already creating new functionality.

Because it just does not work that way. Let’s see an example:
image
Here we have a mesh of 6 faces.
This is what it looks like with the current auto smooth:
image
Only the edge in the middle is sharp.

And this is what it would look like if it worked by filling the shade_smooth attribute:
image
As you can see, 3 edges are sharp. This is because the face-angle threshold selects an edge, and if you write that selection to shade_smooth it gets converted to the face domain, so these two faces end up completely flat shaded:
image

The current auto smooth keeps all faces smooth and only splits the normals along the selected edge, so it produces the result we want.
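The over-selection described above can be sketched in plain Python. This is a simplified, hypothetical two-face version of the mesh in the screenshots (the data layout and selection rule are my assumptions, not Blender's actual implementation):

```python
# Sketch: why moving an edge-angle selection to the face domain over-selects.
# Two hypothetical quads listed by the indices of their bordering edges;
# edge 4 is the shared "middle" edge that the angle threshold picked.
faces = {
    "f0": [0, 1, 4, 2],
    "f1": [4, 3, 5, 6],
}
sharp_edges = {4}

# Converting the edge selection to the face domain: a face becomes
# selected if ANY of its edges is selected.
flat_faces = {f for f, edges in faces.items()
              if any(e in sharp_edges for e in edges)}
print(flat_faces)  # both faces become flat shaded

# Flat shading a face makes EVERY edge of that face look sharp:
rendered_sharp = {e for f in flat_faces for e in faces[f]}
print(len(rendered_sharp))  # all 7 edges render sharp instead of only 1
```

This is why splitting normals along the selected edges (what auto smooth does) gives a different, and here better, result than converting the selection to the face domain.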

Like what the current auto smooth does when you type 180 degrees into it?

EDIT:
This is how I emulate it, by the way; I used the edge crease to select the edge. I heard a Select by Angle node will be split off from the Bevel modifier node, but until then there is really no good way to do this other than the crease.

Sorry, but at this point you are just inventing problems and overcomplicating things. There’s nothing that prevents this from being done as a GN node on a technical level. You are nitpicking implementation details, but those are not what users really care about.

Blender is not the only 3D software out there. There’s multiple other packages which allow for procedural modeling workflows, and most of them offer a simple node or modifier called something along the lines of “Normal” which usually contains mainly an angle parameter. And it works, procedurally. There can be modifications to normals before and after that node modifier, and it still works as expected.

Nitpicking implementation details that neither of us will be responsible for, and inventing artificial problems, is the opposite of improvement.

You need to understand that I am doing exactly what you said. The only difference is that instead of selecting the edge by angle, I select the edge manually, because there is currently no good way to do it in GN, but that does not matter. The important thing is that this is a data-level problem (edge data vs face data), which means it would be the same at the technical level.

Yes, I agree, but you keep sounding like it has no solution or something. Whoever implements it will choose how to solve it. So I don’t get the point of this debate. Are you against having an Auto Smooth node?

I am for the auto smooth node, and I think the node should do exactly what the current auto smooth does. The original problem was GN creating new geometry, thus losing the auto smooth data. The solution would be just moving it to the node level.
What you said was to change the auto smooth node completely and make it partly overlap smooth shading’s functionality, which overcomplicates things.

That’s where the confusion comes from. I did not say that it should happen. All I said was that it was one of the possibilities. I don’t really care about how it’s done as long as it works as expected from user’s point of view.

Hello

When using the separate point node on vertices, it removes a ton of attributes. Why is this happening?

Is there a way to automatically transfer attributes from all other domains to vertices, like the point distribution node does?

(BTW, that’s another good example of why a mesh-to-pointcloud node is necessary)

There is an attribute convert node, but be careful when converting your attributes to another domain; it can be tricky sometimes. In my case, only one edge is creased, and after converting, two points have values. If I convert back to the edge domain, I guess three edges will have values, and those values would not be the same as the original.
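The round trip described here can be sketched in plain Python. The averaging rule below is an assumption for illustration (Blender's actual domain interpolation may differ), and the three-edge chain is hypothetical data:

```python
# Sketch: why an edge -> point -> edge conversion spreads a crease value.
edges = [(0, 1), (1, 2), (2, 3)]   # a chain of 3 edges over 4 points
crease = {1: 1.0}                  # only the middle edge is creased

# edge -> point: average the values of the edges meeting at each point
# (hypothetical rule; the real interpolation mode is an assumption here)
point_val = {}
for p in range(4):
    incident = [crease.get(i, 0.0) for i, e in enumerate(edges) if p in e]
    point_val[p] = sum(incident) / len(incident)
# points 1 and 2 now carry 0.5 each; points 0 and 3 carry 0.0

# point -> edge: average the two endpoint values of each edge
round_trip = {i: (point_val[a] + point_val[b]) / 2
              for i, (a, b) in enumerate(edges)}
print(round_trip)  # three edges now have non-zero values,
                   # and the middle one is no longer 1.0
```

So a single creased edge becomes three partially creased edges after the round trip, which is exactly the lossiness being warned about.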

In my case, only one edge is creased, and after converting, two points have values. If I convert back to the edge domain, I guess three edges will have values, and those values would not be the same as the original.

Hmmm weird

There is an attribute convert node,

Yes, I know, but that means every time we want to scatter with mesh vertices we will need to convert all the attributes we need. Not fair, because everything is done automatically with the point distribute node!
:cry:

Not really. That’s why they are different domains. You just can’t represent edge information on vertices and vice versa. The same goes for faces.

I think point distribute is doing a different thing: it ports the attributes to the point cloud, a completely different object type. The convert node, however, goes from one domain of the object to another domain of the same object. I guess you are right about that.

Hello, all. This is my first post on Devtalk, though I’ve been using Blender since 2.49b.

I’ve been working a lot with Geometry Nodes for some architectural modeling, and they’re fantastic so far. The one limitation I’ve found a serious impediment is that the point nodes require input objects. It forces the node graph to have an external reference, and if one is sharing the graph across projects or with team members, the more self-contained it is, the less fragile it is.

There’s been a lot written about the valid technical reasons why Geometry Nodes need to be attached to an Object, and why they can’t just internally generate more Objects, but I think the discussions so far are missing an opportunity that this could be done without breaking Blender’s assumptions about the Modifier stack.

Here’s an example use-case: I want to create a Grid primitive in Geometry Nodes, then place one of the mesh primitives such as a sphere at every point. I do not need those to be objects, just new vertices, edges, and faces that are part of the same mesh.

Now, I could easily make a Geometry Nodes graph that generates a sphere with the new primitives node. Then I could pass that to multiple Transform nodes, combine their outputs, and have exactly the same thing – just really inefficiently. :slight_smile:

My question is, what are the technical reasons why the point nodes couldn’t take geometry input ports and replicate those vertices and faces as needed, with an implicit Join within the point nodes, rather than replicating Object instances? I am aware of the ramifications for GPU instancing if there is one large mesh instead of multiple smaller meshes that can be instanced; obviously what I propose would (like any other tool) need to be used appropriately.
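The "implicit Join" asked about above can be sketched in plain Python. The data layout (vertex tuples plus index-tuple faces) and the function name are hypothetical, purely to show the copy-and-offset idea rather than Blender's actual internals:

```python
# Sketch: copying one mesh's vertices/faces to every point and merging
# the copies into a single vertex/face list (hypothetical data layout).
def copy_at_points(verts, faces, points):
    """Replicate a mesh at each point, joining all copies into one mesh."""
    out_verts, out_faces = [], []
    for px, py, pz in points:
        base = len(out_verts)             # index offset for this copy
        out_verts += [(x + px, y + py, z + pz) for x, y, z in verts]
        out_faces += [tuple(i + base for i in f) for f in faces]
    return out_verts, out_faces

# a single triangle, copied to three points of a hypothetical grid
tri_verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri_faces = [(0, 1, 2)]
v, f = copy_at_points(tri_verts, tri_faces, [(0, 0, 0), (2, 0, 0), (4, 0, 0)])
print(len(v), f)  # 9 vertices; faces (0, 1, 2), (3, 4, 5), (6, 7, 8)
```

The cost concern in the question is visible even here: every copy duplicates the full vertex data, whereas object instances share it.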

Another thing that might be useful is to allow Geometry Nodes to be attached as a modifier to an Empty object, for those cases where one would otherwise sever the connection from the input object and generate the geometry to the Group Output.

After posting this, a third option occurred to me: Is it feasible to have Blender offer static (immutable) instances of some existing primitives, at unit dimensions, that could be retrieved as predefined (and shared) Object inputs to Geometry Nodes? If the Geometry Node graph needs a default cube as an input, could this be an implicit system-wide resource that can’t be altered but doesn’t have to exist in each Scene?

I’m considering taking a dive into the code and mocking up one or more of these ideas, but before I do that I wanted to ask about the concept here to see if I’m overlooking something obvious. I had no trouble reading the source for some existing Geo Nodes, but I realize I lack the deeper understanding of the application context and the assumptions that drive the API contracts between modules.

Thanks for listening.

Can we talk about the viewer node a bit? What does it do? Does it show its input as an overlay on top of the final geometry? If this is meant for visually inspecting the node tree at several points (just like the attribute spreadsheet, but in a visual manner), wouldn’t it be more straightforward to just have a toggle on every node instead of an additional node that needs connecting? I don’t see the added value of having a dedicated node for this.

Cheers,

Hadrien

The first version of the viewer node (likely to be committed quite soon) just shows the data in the spreadsheet, as a more flexible replacement for the monitor icon on the top of every node. Soon after (hopefully soon, definitely for 3.0), it will display geometry in the viewport too.

Using a toggle on the node is simple at first, but it comes with issues. What happens when a node outputs two geometries? How can it possibly choose which attribute to visualize when you want to do that? What about when you want to keep “visualization points” around for later? etc. It’s just more explicit.

The patch includes a shortcut to “view” the node under the cursor, so the speed of the previous workflow is still available.

I hope that explains it.

That makes sense!

I don’t see how that’s a problem the viewer node solves?

I don’t understand this one.

That’s great, because that was my main concern!

Sorry, I was a bit vague at some points there, wasn’t I?

I was basically referring to this image from Jacques’ field proposal.
image

I meant that you could leave multiple viewer nodes sitting around in your node tree; then all you have to do is click one to see its data. We could potentially support multiple active viewer nodes as well.

Ahhh, indeed, with attributes decoupled from the geometry, that’s not as obvious as I thought. I see. Thanks!

The current point instance node creates instances, but the geometry is automatically turned into ‘real’ geometry if needed. So I’d guess it wouldn’t be too hard to make the point instance node accept plain mesh data. However, that would be a lot less efficient, so it would be nice to have a way to communicate the difference between ‘instancing’ and ‘copying’ the input meshes to the user. That would be useful anyway, because the autoconversion can be rather confusing for the user as it is.

Please note I’m not a GN developer, just screaming from the sidelines :wink:

I totally agree with every single point of @Syscrusher. I’m not a dev, but I wonder if a primitive mesh created with a primitive node inside the tree could be converted into an object (like packed or compressed) behind the scenes before being instanced, avoiding the loss of performance that would occur by copying the plain geometry.

This would actually apply to any piece of geometry at any point of the tree. Maybe a “pack geometry” node could be useful for that. It could have different modes to specify where to place the pivot of the packed geometry object, like at the centroid, on the boundary of the bounding box, or manually placed.

The generated packed object shouldn’t exist in the scene, of course, and would be contained only in the node tree. Maybe it could be output to the scene if needed by the user, with a “spawn object” or “output object” node…

Hello there,

With the arrival of the raycast node, I decided to make a view frustum group that hides instances that are outside the view. As the raycast is only in the latest master, the node groups require Blender 3.0 alpha.

When doing the groups, I noticed two things:

  • The attribute workflow is hard to manage properly: in a node group, I want to use multiple attributes, and I want to delete them for the group output. This results in a lot of wires all around the nodes to remove the attributes with the Attribute Remove node. Maybe finding a way to output only some attributes could help.
  • It would be really useful to have an Active Camera node that outputs the FOV and resolution, in addition to the loc/rot/scale of the object. This could help a lot with integrating GeoNodes with rendering.
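The visibility test such a frustum group performs can be sketched in plain Python. This assumes the point is already in camera space with the camera looking down -Z (Blender's convention); the function name and the horizontal-FOV parameterization are my own choices for illustration:

```python
import math

# Sketch: is a camera-space point inside the view frustum?
# fov_x is the horizontal field of view in radians (an assumed convention).
def in_frustum(pt, fov_x, aspect, near=0.1, far=100.0):
    x, y, z = pt
    if not (-far <= z <= -near):          # must be in front of the camera
        return False
    half_w = -z * math.tan(fov_x / 2)     # half frustum width at depth z
    half_h = half_w / aspect              # derived from the aspect ratio
    return abs(x) <= half_w and abs(y) <= half_h

print(in_frustum((0, 0, -5), math.radians(60), 16 / 9))   # True
print(in_frustum((10, 0, -5), math.radians(60), 16 / 9))  # False
```

An Active Camera node exposing FOV and resolution would supply exactly the `fov_x` and `aspect` inputs this test needs, which is why it would help the rendering integration mentioned above.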

I put the two groups I made in a file, along with an example file for you to look at:

If you like it, consider rating. I’m sharing it because it could be useful to someone.