Geometry Nodes

Some way to improve point distribution would help. Merge-by-distance is an example.

I'm thinking about a distribution that creates accumulations and gaps. If you want to fully cover a surface with instances, you need to create an excessive number of instances to get rid of the gaps.

I think you're right. But a lot of the code is actually still useful for this, and will be when we start working on particles too.

Right, I meant the data being passed between nodes.

For now we're keeping the data and attributes stored in the same place, which means for now the attribute workflow will be the way to do this stuff. Personally I think the implicit connection between a list and the geometry might be a bit confusing. But using a string of attribute nodes could also nullify some of the benefit of a node-based workflow, so I get the arguments for both ways. I'm sure these sorts of conversations will come up more when we focus on procedural modeling use cases.

Yes, that's the idea. It's called a "Minimum Distance" in our designs. More info here: https://developer.blender.org/T82690
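For readers unfamiliar with the concept, a minimum-distance scatter can be sketched in a few lines of Python. This is naive dart-throwing for illustration only, not Blender's actual implementation; a production version would typically use Poisson-disk sampling with a spatial acceleration grid:

```python
import math
import random

def scatter_min_distance(width, height, d_min, max_attempts=10_000, seed=0):
    """Naive dart-throwing: keep a random 2D point only if it lies at
    least d_min away from every point accepted so far."""
    rng = random.Random(seed)
    points = []
    for _ in range(max_attempts):
        candidate = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(candidate, p) >= d_min for p in points):
            points.append(candidate)
    return points

points = scatter_min_distance(10.0, 10.0, d_min=1.0)
# Every accepted pair of points respects the minimum distance.
assert all(math.dist(a, b) >= 1.0
           for i, a in enumerate(points) for b in points[i + 1:])
```

Rejecting candidates that are too close is what prevents the "accumulations" mentioned above, at the cost of discarding some random samples.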

I'm not sure about this one; I'll bring it up with the team.

Yes, an important idea, though it needs quite a bit of design work. It potentially relates to something like this too: https://developer.blender.org/D8637

Theoretically the object info node could be used for this, it has a geometry output which could contain attributes.

Yes! It's more complicated obviously but it does make sense. The largest complication is that objects could have a different evaluated type than they start with (or even multiple output object types), which complicates the fundamental object type -> object data relationship in Blender. This now works for the point cloud object, because it didn't have any existing modifier evaluation system to convert. Applying a node modifier should create all the objects it needs to, potentially with shared data if there are instances.

As far as I know we haven't talked about this. I see this as an orthogonal problem though. Evaluating the modifier and then exporting instance positions in an exporter / Python would do the trick. I guess it's easy to see the possibilities with the spreadsheet editor concept I linked above though.

My experience for this kind of thing is mostly in Houdini; sorry to bother you, but can I ask how you create attributes in Sverchok/Grasshopper if it's list management?

In Houdini there are a bunch of nodes to deal with attributes, but with the "AttributeCreate" node you can manually type an attribute name, and from that you can use it however you like.

In terms of list management there is "AttributeVop": if you click on it, you drop "into" that node, which exposes a monolithic node with several commonplace attributes such as vertex positions, vertex number, normals, color, etc. I think there are 18 they deemed common enough to include. If you need to make your own attribute to use within this node, you just use "Addattribute", where you manually type a name and adjust settings for what it is supposed to be; it will also be available globally through other means. Anyway, you can use AttributeVop to do any number of operations, and it works in a completely "Blender way": just dragging noodles from the attribute outputs of the monolithic node. It's not any different from, say, the Geometry node we have in shading.

That being said, Houdini also has "AttributeWrangle", where the node is just a text field. You just type VEX code into it, so you can call on existing attributes or create them there. Essentially AttributeWrangle and AttributeVop are the same thing; you can do the same things in either. So effectively Houdini has both methods.

For my two cents, I do like typing attributes out. It is a little cleaner, since you don't need to summon a node or multiple nodes every time at the part of the operation that needs them, or do the classic Blender move of dragging really long noodles from a single Texture Coordinate node to slightly save on efficiency. But much like Houdini, I love the idea of having both ways to work.

In my workflow I don't really use attributes. If I need to send data to shaders then I've used vertex colours, but not attributes. I'm honestly not too aware of what is possible in Blender with attributes, or whether you can just send any chosen data with any name. If that is possible then it would be extremely helpful in procedural modelling, but you would still want your vert positions exposed so that you can do all the things that require them for calculations (instancing objects to verts, custom matrix generation, etc.). In Sverchok the vertex number is implicit, just the index of the XYZ vector in the list, so there is no need to separate those properties arbitrarily.
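Conceptually a named attribute works the same way as the Sverchok case described here: it is a list running parallel to the vertex list, with the vertex index implicit in the position. A toy sketch of the idea (the `add_attribute` helper and the dict layout are made up for illustration, not Blender's API):

```python
# Hypothetical sketch: a geometry as parallel lists, where a named
# attribute is just another list indexed by vertex number.
geometry = {
    "position": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
}

def add_attribute(geo, name, default):
    """Create a named per-vertex attribute filled with a default value."""
    geo[name] = [default] * len(geo["position"])

add_attribute(geometry, "weight", 0.0)
geometry["weight"][2] = 1.0  # vertex index 2 is implicit in the list position

# The attribute list always stays the same length as the vertex list.
assert len(geometry["weight"]) == len(geometry["position"])
```

So an attribute with "any chosen data, any name" is exactly this: an extra parallel list, looked up by its string name.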

There are some attribute nodes, and obviously script nodes and such can be used to call different data, but I've sort of avoided them. I find attributes off-putting because, while no doubt powerful, you can't just look at a node and understand it if there's a blank field expecting some string that you have to just know. It creates a barrier to being able to intuit your way through a problem, because if you don't know, you don't know. Versus grabbing a list of vertices and checking a text output to see if you can spot the pattern intuitively, and then creating the maths and list operations accordingly. I'm a super visual person, though, so for my workflow, seeing all the inputs on the front of the nodes and seeing the noodles helps me recognise patterns in data really quickly. The properties panels separated from the node have always kept me from getting into Houdini and Substance Designer, but I know that's just a different workflow.

I made another little test scene to try out the Join Geometry node! Great stuff, and much more performant without having all of those Boolean Unions everywhere!

A few things came up while I was working:

  • The Join Geometry node unassigns materials. The Boolean Union was even preserving material indices for things like different glass / frame materials, but as soon as the mesh went through the Join node, everything went back to clay.
  • I love the feature of clicking on the modifier to jump to the correct node tree! Very useful when scenes get more complex. It was a little too aggressive though, I thought, as I couldn't actually look at any other node trees while the modifier was selected (blue outline). If it could let you view another node tree and just automatically deselect the modifier, that would be great! Or even if the modifier just got a highlight around it to show that you were viewing the node tree associated with that modifier (and retained the click-to-focus feature).
  • A couple of functionality issues with the Vector Math node: Maximum, Minimum, Floor, Ceil, Fraction, etc. (all the right-hand column options) seemed to output (0, 0, 0) regardless of the inputs.
  • A few functions were missing from the scalar Math node also; I found Wrap wasn't working but didn't check for others.
  • You can see at the end of my video, when I'm clicking through the various node groups / components, that they're all kind of broken in their proportions. I need to confirm this, but it looked like the exposed vector input on the modifier wasn't being refreshed to reflect the default values of the selected node group. I fixed the proportions for the chair (an exposed XYZ vector on the modifier), and then when I loaded the table or the window (both of which also had exposed XYZ vectors), they just carried those same values over on the modifier from the chair. I was expecting them to read the default or last-used values from the newly selected group instead.

Thanks for the work you're doing on this! It's already a great tool!

Ah I see, thanks for the response.

This is undoubtedly true and accurately represents my first experience with Houdini :joy:
It's a learning wall, which can be a negative, but just like an instrument, once you know it, its power is yours. Much like you, I am a visual learner, so I usually work in AttributeVops, admittedly; but I do concede that the ability to create your own attributes pretty much opens a new dimension to a procedural workflow. Otherwise you are basically at the mercy of whatever tools the system gives you, instead of being able to make your own on top of what it gives.

I asked back in the particle nodes project for something more visual, with list attribute nodes:

So I feel you there, and perhaps we can get stuff like that in the future to expose things like vertex position etc., though Jacques wasn't as keen on it. However, I must admit I am glad that what I think is probably a more flexible workflow is coming to Blender, even if it doesn't click with me as much as a visual person. I think I'd rather have the power and just buckle up for a learning experience, as I did with Houdini. It's still all super early, so let's see!

Thank you Hans for the thorough answers, I love having this kind of back and forth with you guys; hope that's not taking away too much of your time. In any case, here's a thing regarding attributes.

Many have stressed this in the past, but I'll mention it again (since this is happening now): in motion graphics and visual effects, transferring attributes is basically the cornerstone of proceduralism. It can happen between arbitrary geometries and is useful in pretty much every use case. Now, I'm not too worried, because we already have a "Data Transfer" modifier that's super capable; I can only wish for it to support arbitrary attribute maps in the future, and be turned into a node.

I'm thinking of a bunch of nodes we could have for manipulating attributes: transfer, fade, blur, interpolate, promote, mirror, rotate, remap... mostly self-explanatory. (I made up all those names!)
Some of those, like "transfer" and "fade", would be history-dependent, in that they solve their current state by looking up variables from the previous frame (the attribute value of course, but also the surrounding geometry). For instance, you want a footprint pushed into the snow by a character's boot not to disappear once they lift the boot, or heat from a fire to accumulate in a piece of wood and keep it burning after it's taken out of the fire.
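The history-dependent idea can be sketched as a per-frame step that reads last frame's attribute values instead of recomputing from scratch. This is a toy model with made-up names, not an actual design:

```python
def step_heat(heat, sources, decay=0.05):
    """One frame of a toy per-point 'heat' attribute solver: each point
    keeps the heat it accumulated on previous frames (the history),
    gains heat from any active source, and slowly cools down."""
    return [max(0.0, h * (1.0 - decay) + s) for h, s in zip(heat, sources)]

heat = [0.0, 0.0, 0.0]
for frame in range(10):                       # point 0 sits in a fire
    heat = step_heat(heat, [0.3, 0.0, 0.0])
for frame in range(10):                       # taken out of the fire:
    heat = step_heat(heat, [0.0, 0.0, 0.0])   # no input, but still warm

assert heat[0] > 0.0    # the wood keeps burning after leaving the fire
assert heat[1] == 0.0   # untouched points never heated up
```

The key property is that the solver's output depends on the whole sequence of frames, not just the current one, which is exactly why such nodes cannot be evaluated statelessly.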

I understand I'm very quickly diving into effects and particles, which I know are not a short-term goal anymore, but I imagine those are going to use the same system of attributes to drive their behaviour. I don't know whether "history-dependence" is still a thing in geometry nodes, since we're not talking about particles anymore, but I imagine it should be kept in mind.

Cheers!

One alternative that might satisfy having a regular-sized node while still exposing options to users could be autocomplete, like we have in the console. The user could press Tab and get a popup list of alternatives, or, if there was only one option at that point, the field would just take that option. Even better if the list could auto-update with any user-made attributes within the file. If that could be used anywhere there is an attribute field on a node, it would make the whole process much more approachable and learnable.
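The console-style completion suggested here is simple to sketch. The helper name and the behaviour on ambiguity are purely illustrative:

```python
def complete_attribute(partial, known_attributes):
    """Console-style Tab completion: return the single attribute name
    matching the typed prefix, or the list of candidates if ambiguous."""
    matches = sorted(a for a in known_attributes if a.startswith(partial))
    if len(matches) == 1:
        return matches[0]   # unambiguous: fill the field directly
    return matches          # ambiguous (or empty): show a popup list

attrs = {"position", "normal", "radius", "rotation", "my_mask"}
assert complete_attribute("ra", attrs) == "radius"                  # unique
assert complete_attribute("r", attrs) == ["radius", "rotation"]     # popup
```

Keeping `known_attributes` in sync with user-created attributes in the file is the part that would need real integration work; the matching itself is trivial.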

Hadricus, please be aware of my original remark in the post: "Disclaimer: Remember to keep this topic Blender-only. Posts mentioning or sharing features from other software will be deleted."

Oh that's right, sorry. And I'm often the one to remind other people! I reworded my message.

Cheers,

Hadrien

Hello! I like the idea of the active state in modifiers, as it could potentially allow copying and pasting modifiers from one object to another with Ctrl+C / Ctrl+V in the future.
I'm not totally sure about the change to the hotkeys (X for delete, Ctrl+A for apply and Shift+D for duplicate). I use them a lot, and here they no longer work on hover but on the last selected modifier, making the process slower. While I can live with this, I found what I consider a bug in the active state.

As you can see, in a modifier with a long list of subpanels, the hotkeys no longer work everywhere; I think the modifier doesn't register "empty space", which limits the use of hotkeys a lot and forces the user to pay extra attention to where they put the mouse pointer.

I think this is just a bug. If the modifier was already the active one, you could have just used either shortcut.

Thanks. I still have to make clicks on panel headers set the active modifier. Planned though!

That's ludicrous, tbh. In Houdini it's just a Scatter and a Copy to Points node.

Edit: this is the equivalent setup:

Hi Edgan, welcome to the forum. This particular thread is a place to share the development of this functionality for the Blender project. How other software handles this is out of the scope of this discussion.

If you want to understand some of the design decisions, or need some clarity on how to be involved and help the project, please let me know.

Hi, one thing to keep in mind is that this kind of low-level setup can be bundled into a group, so that in the end it's just… one node. :wink:

Quick question about Geometry Nodes in its first incarnation, when it arrives in master.

Will it support curves?

I mean, will we be able to sample curves apart from just geometry?

Got it, but I think it's necessary to compare the development and the already existing procedural toolsets in Blender to what's already on the market. There is no need to reinvent the wheel: efficient geometry-manipulation workflows are pretty much a solved problem, and now the objective is to implement one in Blender.
Some of the fundamental things I'd love to see are:

  • Procedural geometry grouping and data transfer.

  • The ability to reference parameters and attributes in node values directly.

  • Low-level geometry and attribute manipulation through code, with the ability to iterate automatically over points and prims, with point-cloud, minpos, neighbors, lerp, etc. functions.

  • The ability to visualize, template, and bypass each individual node, as well as being able to inspect the data flow.
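The "iterate automatically over points" request in the list above can be sketched as a function that is evaluated once per point, with helpers such as lerp available. `for_each_point` and the dict-of-lists geometry are made-up illustrations, not a proposed API:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t."""
    return a + (b - a) * t

def for_each_point(geo, body):
    """Run `body` once per point, wrangle-style; `body` reads and writes
    that point's attribute values through the `point` dict view."""
    for i in range(len(geo["position"])):
        point = {name: values[i] for name, values in geo.items()}
        point["index"] = i          # the point index is available too
        body(point)
        for name in geo:            # write modified values back
            geo[name][i] = point[name]

geo = {"position": [0.0, 1.0, 2.0], "mask": [0.0, 0.0, 0.0]}

# Blend every point's mask halfway toward 1.0, one point at a time.
for_each_point(geo, lambda p: p.update(mask=lerp(p["mask"], 1.0, 0.5)))
assert geo["mask"] == [0.5, 0.5, 0.5]
```

The appeal of this model is that the per-point code never spells out the loop, so the same snippet works for a cube or a million-point cloud.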

Not at first. Currently curves don't even support attributes / vertex groups, etc., so there is some work to do there first.

Sure, but we have concerns about intellectual property that prevent anyone from just directly mentioning another software here, which is why I got gently slapped on the wrist earlier (I described a bit too closely a piece of software whose name begins with Hou and ends with dini).

Now for real, parameter referencing is just "connecting node sockets" in Blender, so that's an improvement already.

Node muting (= bypass) is planned, afaik. I heard someone important talk about it.
Inspecting at an arbitrary point in the node tree is something I pushed for earlier, but it didn't get much of an echo. Yet I concur, this is important. While working with nodes you go back and forth a lot: you want to know what happens at the end of this node, of that node, etc. If you need to reconnect to an "output mesh" each time, it's going to be very tiring. I think anyone who's worked with nodes can appreciate that.
"Template" I don't know what that does, though. Same as in the software that begins with Ma and ends with ya? I.e. it makes data appear as a dark overlay wireframe, unselectable, so as to act as a sort of reference?

And ideally yeah, but I don't think a language specifically for this is in the works, unfortunately. That's glitter though.
