Some way to improve point distribution would help. Merge-by-distance is an example.
I'm thinking about distributions that create accumulations and gaps. If someone wants to fully cover a surface with instances, they currently need to create an excessive number of instances to get rid of the gaps.
I think you're right. But a lot of the code is actually still useful for this, and will be when we start working on particles too.
Right, I meant the data being passed between nodes.
For now we're keeping the data and attributes stored in the same place, which means for now the attribute workflow will be the way to do this stuff. Personally I think the implicit connection between a list and the geometry might be a bit confusing. But using a string of attribute nodes could also nullify some of the benefit of a node-based workflow, so I get the arguments for both ways. I'm sure these sorts of conversations will come up more when we focus on procedural modeling use cases.
I'm not sure about this one, I'll bring it up with the team.
Yes, an important idea, though it needs quite a bit of design work. It potentially relates to something like this too: https://developer.blender.org/D8637
Theoretically the object info node could be used for this, it has a geometry output which could contain attributes.
Yes! It's more complicated obviously but it does make sense. The largest complication is that objects could have a different evaluated type than they start with (or even multiple output object types), which complicates the fundamental object type -> object data relationship in Blender. This now works for the point cloud object, because it didn't have any existing modifier evaluation system to convert. Applying a node modifier should create all the objects it needs to, potentially with shared data if there are instances.
As far as I know we haven't talked about this. I see this as an orthogonal problem though. Evaluating the modifier and then exporting the instance positions in an exporter / Python script would do the trick. I guess it's easy to see the possibilities with the spreadsheet editor concept I linked above though.
My experience with this kind of thing is mostly in Houdini; sorry to bother you, but can I ask how you create attributes in Sverchok/Grasshopper if it's all list management?
In Houdini there are a bunch of nodes for dealing with attributes, but with the "AttributeCreate" node you can manually type an attribute name, and from then on you can use it however you like.
In terms of list management there is "AttributeVop": if you click on it you drop "into" that node, which exposes a monolithic node with several commonplace attributes such as vertex positions, vertex number, normals, color, etc. I think there are 18 they deemed common enough to include. If you need to make your own attribute to use within this node, you just use "Addattribute", where you manually type a name and adjust settings for what it is supposed to be; it will also be available globally elsewhere. Anyway, you can use AttributeVop to do any number of operations, and it works in a completely "Blender way": just dragging noodles from the attribute outputs of the monolithic node. It's not any different from, say, the geometry node we have in shading.
That being said, Houdini also has "AttributeWrangle", where the node is just a text field. You type VEX code into it, so you can call on existing attributes or create them there. Essentially AttributeWrangle and AttributeVop are the same thing; you can do the same things in either. So effectively Houdini has both methods.
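To make the comparison concrete, here's a toy sketch in plain Python (not VEX, and not any real Houdini or Blender API; the `wrangle` helper and attribute names are made up for illustration) of the wrangle idea: geometry attributes stored as named parallel lists, with a small per-point function that can read existing attributes and create new ones on demand.

```python
# Toy "wrangle": run a per-point function over geometry whose
# attributes are stored as named parallel lists; the function may
# read existing attributes and create new ones by assigning to them.
def wrangle(geometry, fn):
    n = len(geometry["P"])  # "P" = point positions, Houdini-style name
    for i in range(n):
        # Gather this point's view of every attribute.
        point = {name: values[i] for name, values in geometry.items()}
        fn(point)
        # Write back, creating any new attributes the function added.
        for name, value in point.items():
            geometry.setdefault(name, [0.0] * n)[i] = value
    return geometry

geo = {"P": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]}

def make_height(pt):
    # "Type code into it": create a 'height' attribute from the x coordinate.
    pt["height"] = pt["P"][0] * 2.0

wrangle(geo, make_height)
print(geo["height"])  # [0.0, 2.0, 4.0]
```

A visual VOP-style network and a typed wrangle both reduce to this same per-point evaluation; the only difference is whether the function body is noodles or text.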
For my two cents, I do like typing the attributes out. It is a little cleaner, since you don't need to summon a node or multiple nodes every time at the part of the graph that needs them, or do the classic Blender move of dragging really long noodles from a single Texture Coordinate node to slightly save on efficiency. But much like Houdini, I love the idea of having both ways to work.
In my workflow I don't really use attributes. If I need to send data to shaders I've used vertex colours, but not attributes. I'm honestly not too aware of what is possible in Blender with attributes, and whether you can just send any chosen data with any name. If that is possible then it would be extremely helpful in procedural modelling, but you would still want your vertex positions exposed so that you can do all the things that require them for calculations (instancing objects to verts, custom matrix generation, etc.). In Sverchok the vertex number is implicit, just the index of the XYZ vector in the list, so there's no need to separate those properties arbitrarily.
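The implicit-index idea can be shown in a couple of lines of plain Python (a Sverchok-flavoured sketch, not actual Sverchok code): the vertex number is just each XYZ vector's position in the list, so no separate index attribute is ever stored.

```python
# Sverchok-style list management: positions are a plain list of XYZ
# tuples, and the vertex "index" is implicit -- it is just each
# vector's position in the list, recoverable with enumerate().
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]

# Example: keep only the vertices whose implicit index is even.
even_verts = [v for i, v in enumerate(verts) if i % 2 == 0]
print(even_verts)  # the first and third vertices
```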
There are some attribute nodes, and obviously script nodes and the like can be used to call different data, but I've sort of avoided them. I find attributes off-putting because, while no doubt powerful, you can't just look at a node and understand it if there's a blank field expecting some string that you have to just know. It creates a barrier to intuiting your way through a problem, because if you don't know, you don't know. Versus grabbing a list of vertices and checking a text output to see if you can spot the pattern intuitively, then building the maths and list operations accordingly. I'm a super visual person though, so for my workflow, seeing all the inputs on the front of the nodes and seeing the noodles helps me recognise patterns in data really quickly. Properties panels separated from the node have always kept me from getting into Houdini and Substance Designer, but I know that's just a different workflow.
I made another little test scene to try out the Join Geometry node! Great stuff and much more performant without having all of those Boolean Unions everywhere!
A few things came up while I was working:
The Join Geo node unassigns materials. The Boolean Union was even preserving material indexes for things like different glass / frame materials but as soon as it went through the Join node, everything went back to clay.
I love the feature where you can click on the modifier and it jumps you to the correct node tree! Very useful when scenes get more complex. It was a little too aggressive though, I thought, as I couldn't actually look at any other node trees while the modifier was selected (blue outline). If it could let you view another node tree and just automatically deselect the modifier, that would be great! Or even if the modifier just got a highlight around it to show that you were viewing the node tree associated with that modifier (and retained the click-to-focus feature).
A couple of functionality issues with the Vector Math nodes. Maximum, Minimum, Floor, Ceil, Fraction etc (all the right hand column options) seemed to have (0,0,0) outputs regardless of the inputs.
A few functions were also missing from the scalar Math node; I found Wrap wasn't working, but didn't check for others.
You can see at the end of my video, when I'm clicking through the various node groups / components, that they're all kind of broken in their proportions. I need to confirm this, but it looked like the exposed vector input on the modifier wasn't being refreshed to reflect the default values of the selected node group. I fixed the proportions for the chair (an exposed XYZ vector on the modifier) and then, when I loaded the table or the window (both of which also had exposed XYZ vectors), they just carried those same values on the modifier from the chair. I was expecting them to read the default or last used values from the newly selected group instead.
Thanks for the work you're doing on this! It's already a great tool!
This is undoubtedly true, and accurately represents my first experience with Houdini.
It's a learning wall, which can be a negative, but just like an instrument, once you know it, it works in your favour. Much like you I am a visual learner, so admittedly I usually work in AttributeVops; but I do concede that the ability to create your own attributes pretty much opens a new dimension to a procedural workflow. Otherwise you are basically at the mercy of whatever the system gives you as tools, instead of being able to make your own on top of what it gives.
I asked back in the particle nodes project for something more visual, with list attribute nodes:
So I feel you there, and perhaps we can get stuff like that in the future to expose things like vertex position etc., though Jacques wasn't as keen on it. However I must admit I am glad that what I think is probably a more flexible workflow is coming to Blender, even if it doesn't click with me as much as a visual person. I think I'd rather have the power and just buckle up for a learning experience, as I did with Houdini. It's still all super early, so let's see!
Thank you Hans for the thorough answers, I love having this kind of back and forth with you guys, hope that's not taking away too much of your time. In any case, here's a thing regarding attributes.
Many have stressed this in the past, but I'll mention it again (since this is happening now): in motion graphics and visual effects, transferring attributes is basically the cornerstone of proceduralism. It can happen between arbitrary geometries and is useful in pretty much every use case. Now I'm not too worried, because we already have a "data transfer" modifier that's super capable; I can only wish for it to support arbitrary attribute maps in the future, and be turned into a node.
I'm thinking of a bunch of nodes we could have for manipulating attributes: transfer, fade, blur, interpolate, promote, mirror, rotate, remap… mostly self-explanatory. (I made up all those names!)
Some of those, like "transfer" and "fade", would be history-dependent, in that they solve their current state by looking up variables from the previous frame (the attribute value of course, but also surrounding geometry). For instance, you want a footprint in the snow pushed in by the boot of a character to not disappear once they lift their boot, or heat from a fire to accumulate and heat up a piece of wood, and have it keep burning after it's taken out of the fire.
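As a rough illustration (plain Python with made-up numbers and names, nothing to do with any planned Blender API), a history-dependent attribute like "heat" can only be computed by carrying the previous frame's value forward:

```python
# Toy history-dependent attribute: per-point "heat" rises while a point
# is inside the fire and slowly decays afterwards. Each frame's value
# depends on the previous frame's value, not just on the current inputs.
def step_heat(heat, in_fire, gain=0.3, decay=0.05):
    out = []
    for h, burning in zip(heat, in_fire):
        h = h + gain if burning else h - decay
        out.append(max(0.0, min(1.0, h)))  # clamp to [0, 1]
    return out

heat = [0.0, 0.0]          # two points, both cold
for frame in range(3):     # point 0 sits in the fire for 3 frames
    heat = step_heat(heat, in_fire=[True, False])
heat = step_heat(heat, in_fire=[False, False])  # fire removed
print(heat)  # point 0 stays hot (about 0.85) after leaving the fire
```

A stateless node tree re-evaluated from scratch each frame cannot produce this; something in the system has to persist the attribute between frames, which is the design question being raised.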
I understand I'm very quickly diving into effects and particles, which I know are not a short-term goal anymore, but I imagine those are going to use the same system of attributes to drive their behaviour. I don't know whether "history-dependentness" is still a thing in "geometry nodes", since we're not talking about particles anymore, but I imagine it should be kept in mind.
One alternative that might satisfy both keeping a regular-sized node and exposing options to users could be autocomplete, like we have in the console. The user could press Tab to get a popup list of alternatives, or, if there was only one option at that point, the field could just take that option. Even better if the list could auto-update with any user-made attributes in the file. If that could be used anywhere there was an attribute field on a node, it would make the whole process much more approachable and learnable.
Hadricus, please be aware of my original remark in the post: "Disclaimer: Remember to keep this topic Blender-only. Posts mentioning or sharing features from other software will be deleted."
Hello! I like the idea of the active state in modifiers, as it could potentially make it possible in the future to copy and paste modifiers from one object to another with Ctrl+C / Ctrl+V.
I'm not totally sure about the change to the hotkeys (X for delete, Ctrl+A for apply and Shift+D for duplicate). I use them a lot, and here they no longer work on hover but on the last selected modifier, which makes the process slower. While I can live with this, I found what I consider a bug in the active state.
(attached image: nodes_2)
As you can see, in a modifier with a long list of subpanels, the hotkeys don't work everywhere; it seems the modifier doesn't consider "empty space", which limits the use of hotkeys a lot and forces the user to pay extra attention to where they put the mouse pointer.
Hi Edgan, welcome to the forum. This particular thread is a place to share the development of this functionality for the Blender project. How other software handles this is out of the scope of this discussion.
If you want to understand some of the design decisions, or need some clarity on how to get involved and help the project, please let me know.
Got it, but I think it's necessary to compare this development, and the already existing procedural toolsets in Blender, to what's already on the market. There is no need to reinvent the wheel: an efficient geometry manipulation workflow is a pretty much solved problem, and now the objective is to implement it in Blender.
Some of the fundamental things Iād love to see are:
Procedural geometry grouping and data transfer.
The ability to reference parameters and attributes in node values directly.
Low-level geometry and attribute manipulation through code, with the ability to iterate automatically over points and prims, with point-cloud, minpos, neighbors, lerp, etc. functions.
The ability to visualize, template, and bypass each individual node, as well as being able to inspect the data flow.
Sure, but we have concerns about intellectual property that prevent anyone from just directly mentioning another software here, which is why I got gently slapped on the wrist earlier (I described a bit too closely a piece of software whose name begins with Hou and ends with dini).
Now for real, parameter referencing is just "connecting node sockets" in Blender, so that's an improvement already.
Node muting (=bypass) is planned afaik. I heard someone important talk about it.
Inspecting at an arbitrary point in the node tree is something I pushed for earlier, but it didn't get much echo. Yet I concur, this is important. While working with nodes you go back and forth a lot: you want to know what happens at the end of this node, of that node, etc. If you need to reconnect to an "output mesh" each time, it's going to be very tiring. I think anyone who's worked with nodes can appreciate that.
"Template" I don't know about, though. Same as in the software that begins with Ma and ends with ya? i.e. it makes data appear as a dark overlay wireframe, unselectable, so it acts as a sort of reference?
And ideally yeah, but I don't think a language specifically for this is in the works, unfortunately. That's glitter though.