Hi devs,
I’m wondering what your plans are regarding the evaluation of the Nodes (or Empty) modifier.
Does the Apply button make sense for the Nodes modifier? If so, what would the result look like in a simple case: a cube’s instances (with randomized size and rotation) scattered over a plane? After hitting the Apply button, would the result be a single object with the cube’s instances in its mesh data, or would every instance become a separate object (a cube with randomized size and rotation)?
Also, let’s say I have a forest generator made with gNodes and I need to export the forest (individual trees) into a game engine as an array of [asset_name, transformation_matrix]. Will the only possibility be to do it the “classical way” with the Python API, as with a particle system, where I evaluate the PS and then get the particles’ positions? Or do you have some other solution in mind?
Some way to improve point distribution would help; merge-by-distance is one example. I’m thinking of a distribution which creates accumulations and gaps: if someone wants to fully cover the ground with instances, they currently need to create an excessive number of instances to get rid of the gaps.
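To illustrate the idea outside of Blender entirely, here is a minimal dart-throwing sketch in plain Python (all names are mine): a candidate point is only kept if it is at least some minimum distance from every point accepted so far, which evens out the distribution without needing excess instances.

```python
import math
import random

def min_distance_scatter(num_candidates, min_dist, size=1.0, seed=0):
    """Keep a random candidate only if it is at least min_dist away
    from every point accepted so far (simple dart throwing)."""
    rng = random.Random(seed)
    points = []
    for _ in range(num_candidates):
        p = (rng.uniform(0.0, size), rng.uniform(0.0, size))
        if all(math.dist(p, q) >= min_dist for q in points):
            points.append(p)
    return points

print(len(min_distance_scatter(2000, 0.05)))
```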
I think you’re right. But a lot of the code is actually still useful for this, and will be when we start working on particles too.
Right, I meant the data being passed between nodes.
For now we’re keeping the data and attributes stored in the same place, which means for now the attribute workflow will be the way to do this stuff. Personally I think the implicit connection between a list and the geometry might be a bit confusing. But using a string of attribute nodes could also nullify some of the benefit of a node-based workflow, so I get the arguments for both ways. I’m sure these sorts of conversations will come up more when we focus on procedural modeling use cases.
I’m not sure about this one, I’ll bring it up with the team.
Yes, an important idea, though it needs quite a bit of design work. It potentially relates to something like this too: https://developer.blender.org/D8637
Theoretically the Object Info node could be used for this; it has a geometry output which could contain attributes.
Yes! It’s more complicated obviously but it does make sense. The largest complication is that objects could have a different evaluated type than they start with (or even multiple output object types), which complicates the fundamental object type → object data relationship in Blender. This now works for the point cloud object, because it didn’t have any existing modifier evaluation system to convert. Applying a node modifier should create all the objects it needs to, potentially with shared data if there are instances.
As far as I know we haven’t talked about this. I see this as an orthogonal problem though. Evaluating the modifier and then exporting instance positions in an exporter / python would do the trick. I guess it’s easy to see the possibilities with the spreadsheet editor concept I linked above though.
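For what it’s worth, something along these lines already works for instances made by particle systems and collections, and would presumably apply here too; a minimal sketch using the evaluated depsgraph (the [asset_name, transformation_matrix] pairs are the format asked about above):

```python
import bpy

# Evaluate the scene, then walk every instance the depsgraph generated.
depsgraph = bpy.context.evaluated_depsgraph_get()

pairs = []
for inst in depsgraph.object_instances:
    if inst.is_instance:  # generated instances only, not "real" objects
        # matrix_world must be copied: the iterator reuses its memory.
        pairs.append((inst.object.name, inst.matrix_world.copy()))

for name, matrix in pairs:
    print(name, matrix.translation)
```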
My experience with this kind of thing is mostly in Houdini; sorry to bother you, but can I ask how you create attributes in Sverchok/Grasshopper if it’s all list management?
In Houdini there are a bunch of nodes to deal with attributes, but with the “AttributeCreate” node you can manually type an attribute name, and from there you can use it however you like.
In terms of list management there is “AttributeVop”, where if you click on it you drop “into” that node, which exposes a monolithic node with several commonplace attributes such as vertex positions, vertex number, normals, color, etc. I think there are 18 they deemed common enough to include. If you need to make your own attribute to use within this node, you just use “AddAttribute”, where you manually type a name and adjust settings for what it’s supposed to be; it will also be available globally by other means. Anyway, you can use AttributeVop to do any number of operations, and it works in a completely “Blender way”: just dragging noodles from the attribute outputs of the monolithic node. It’s not any different from, say, the Geometry node we have in shading.
That being said, Houdini also has “AttributeWrangle”, where the node is just a text field: you type VEX code into it, so you can call on existing attributes or create them there. Essentially AttributeWrangle and AttributeVop are the same thing; you can do the same work in either. So effectively Houdini has both methods.
For my two cents, I do like typing attributes out. It’s a little cleaner, since you don’t need to summon a node (or several) at the area of the operation that needs them, or do the classic Blender move of dragging really long noodles from a single Texture Coordinate node to slightly save on efficiency. But much like Houdini, I love the idea of having both ways to work.
In my workflow I don’t really use attributes. If I need to send data to shaders I’ve used vertex colours, but not attributes. I’m honestly not too aware of what is possible in Blender with attributes, and whether you can just send any chosen data under any name. If that is possible, it would be extremely helpful in procedural modelling, but you would still want your vertex positions exposed so that you can do all the things that require them for calculations (instancing objects to verts, custom matrix generation, etc.). In Sverchok the vertex number is implicit as the index of the XYZ vector in the list, so there’s no need to separate those properties arbitrarily.
There are some attribute nodes, and obviously script nodes and the like can be used to call different data, but I’ve sort of avoided them. I find attributes off-putting because, while no doubt powerful, you can’t just look at a node and understand it if there’s a blank field expecting some string that you have to just know. It creates a barrier to intuiting your way through a problem, because if you don’t know, you don’t know. Versus grabbing a list of vertices and checking a text output to see if you can spot the pattern intuitively, and then building the maths and list operations accordingly. I’m a super visual person though, so for my workflow, seeing all the inputs on the front of the nodes and seeing the noodles helps me recognise patterns in data really quickly. Those properties panels separated from the node have always kept me from getting into Houdini and Substance Designer, but I know that’s just a different workflow.
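For reference, it does seem possible to create arbitrarily named attributes from Python in recent builds; a minimal sketch, assuming a mesh object is active (the name “density” is just an example):

```python
import bpy

# Assumes the active object is a mesh and the build exposes the
# generic attributes API.
mesh = bpy.context.object.data
attr = mesh.attributes.new(name="density", type='FLOAT', domain='POINT')

# One float per vertex; any per-point data you like.
for i, item in enumerate(attr.data):
    item.value = i / max(len(attr.data) - 1, 1)
```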
I made another little test scene to try out the Join Geometry node! Great stuff and much more performant without having all of those Boolean Unions everywhere!
A few things came up while I was working:
The Join Geo node unassigns materials. The Boolean Union was even preserving material indexes for things like different glass / frame materials but as soon as it went through the Join node, everything went back to clay.
I love the feature of being able to click on the modifier and have it jump you to the correct node tree! Very useful when scenes get more complex. It was a little too aggressive though, I thought, as I couldn’t actually switch to look at any other node trees while the modifier was selected (blue outline). If it could let you view another node tree and just automatically deselect the modifier, that would be great! Or even if the modifier just got a highlight around it to show that you were viewing the node tree associated with it (and retained the click-to-focus feature).
A couple of functionality issues with the Vector Math node: Maximum, Minimum, Floor, Ceil, Fraction, etc. (all the right-hand column options) seemed to output (0, 0, 0) regardless of the inputs.
A few functions in the scalar Math node also seemed broken: I found Wrap wasn’t working, but didn’t check the others.
You can see at the end of my video when I’m clicking through the various node groups / components that they’re all kind of broken in their proportions. I need to confirm this but it looked like the exposed vector input on the modifier wasn’t being refreshed to reflect the default values of the selected node group. I fixed the proportions for the chair (an exposed XYZ vector on the modifier) and then when I loaded the table or the window (both of which also had exposed XYZ vectors), they just carried those same values on the modifier from the chair. I was expecting them to read the default or last used values from the newly selected group instead.
Thanks for the work you’re doing on this! It’s already a great tool!
This is undoubtedly true, and accurately represents my first experience with Houdini.
It’s a learning wall, which can be a negative, but just like an instrument, once you know it, it’s yours to wield. Much like you, I am a visual learner, so admittedly I usually work in AttributeVops; but I do concede that the ability to create your own attributes pretty much opens a new dimension in a procedural workflow. Otherwise you are basically at the mercy of whatever tools the system gives you, instead of being able to make your own on top of what it provides.
I asked for something more visual, with list attribute nodes, back in the particle nodes project:
So I feel you there, and perhaps we can get stuff like that in the future to expose things like vertex position etc., though Jacques wasn’t as keen on it. However, I must admit I am glad that what I think is probably a more flexible workflow is coming to Blender, even if it doesn’t click with me as much as a visual person. I think I’d rather have the power and just buckle up for a learning experience, as I did with Houdini. It’s still all super early, so let’s see!
Thank you, Hans, for the thorough answers. I love having this kind of back and forth with you guys; hope that’s not taking up too much of your time. In any case, here’s a thing regarding attributes.
Many have stressed this in the past, but I’ll mention it again (since this is happening now): in motion graphics and visual effects, transferring attributes is basically the cornerstone of proceduralism. It can happen between arbitrary geometries and is useful in pretty much every use case. Now, I’m not too worried, because we already have a “data transfer” modifier that’s super capable; I can only wish for it to support arbitrary attribute maps in the future, and to be turned into a node.
I’m thinking of a bunch of nodes we could have for manipulating attributes: transfer, fade, blur, interpolate, promote, mirror, rotate, remap… mostly self-explanatory. (I made up all those names!)
Some of those, like “transfer” and “fade”, would be history-dependent, in that they solve their current state by looking up variables from the previous frame (attribute values of course, but also surrounding geometry). For instance, you want a footprint pushed into the snow by a character’s boot not to disappear once they lift the boot, or heat from a fire to accumulate in a piece of wood and keep it burning after it’s taken out of the fire.
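To sketch the history-dependence in plain Python (all names here are made up): each frame, a per-point value can be pushed up by the current influence, but on its own it only fades slowly, so the effect lingers after the source is gone.

```python
# Illustrative only: "heat", "influence" and "fade" are invented names.
def advance_frame(heat, influence, fade=0.02):
    """heat: per-point values carried over from the previous frame.
    influence: per-point values computed for the current frame only."""
    return [max(h - fade, inf) for h, inf in zip(heat, influence)]

heat = [0.0, 0.0, 0.0]
heat = advance_frame(heat, [1.0, 0.5, 0.0])  # fire touches two points
heat = advance_frame(heat, [0.0, 0.0, 0.0])  # source removed; heat lingers
print(heat)  # [0.98, 0.48, 0.0]
```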
I understand I’m very quickly diving into effects and particles, which I know are not a short-term goal anymore, but I imagine those are going to use the same system of attributes to drive their behaviour. I don’t know whether “history-dependentness” is still a thing in “geometry nodes”, since we’re not talking about particles anymore, but I imagine it should be kept in mind.
One alternative that might satisfy having a regular-sized node while still exposing options to users could be autocomplete, like we have in the console. The user would press Tab and get a popup list of alternatives, or, if there was only one option at that point, the field would just take that option. Even better if the list could auto-update with any user-made attributes in the file. If that could be used anywhere there’s an attribute field on a node, it would make the whole process much more approachable and learnable.
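For what it’s worth, Blender’s Python UI already has a searchable string-field template (prop_search), the one behind modifier fields like “Vertex Group”; a minimal sketch of the pattern, with all panel and property names made up for illustration:

```python
import bpy

# Hypothetical string property to hold the chosen name.
bpy.types.Scene.attr_name = bpy.props.StringProperty(name="Attribute")

class VIEW3D_PT_attribute_picker(bpy.types.Panel):
    bl_label = "Attribute Picker"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "Item"

    def draw(self, context):
        ob = context.object
        if ob is not None and ob.type == 'MESH':
            # Text field plus a filtered dropdown of existing entries
            # (vertex groups here; attribute names in the real thing).
            self.layout.prop_search(context.scene, "attr_name",
                                    ob, "vertex_groups")

bpy.utils.register_class(VIEW3D_PT_attribute_picker)
```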
Hadricus, please be aware of my original remark in the post: “Disclaimer: Remember to keep this topic Blender-only. Posts mentioning or sharing features from other software will be deleted.”
Hello! I like the idea of the active state in modifiers, as it could potentially, in the future, make it possible to copy and paste modifiers from one object to another with Ctrl+C/Ctrl+V.
I’m not totally sure about the change to the hotkeys (X for delete, Ctrl+A for apply and Shift+D for duplicate). I use them a lot, and they no longer act on the modifier under the cursor but on the last selected modifier, making the process slower. While I can live with this, I also found what I consider a bug in the active state.
As you can see, in a modifier with a long list of subpanels the hotkeys no longer work everywhere; I think the modifier doesn’t consider “empty space”, which limits the use of hotkeys a lot and forces the user to pay extra attention to where they put the mouse pointer.
Hi Edgan, welcome to the forum. This particular thread is a place to share the development of this functionality for the Blender project. How other software handles this is out of the scope of this discussion.
If you want to understand some of the design decisions, or need some clarity on how to get involved and help the project, please let me know.
Got it, but I think it’s necessary to compare this development and the already existing procedural toolsets in Blender to what’s already on the market. There is no need to reinvent the wheel: efficient geometry manipulation workflows are pretty much a solved problem, and now the objective is to implement one in Blender.
Some of the fundamental things I’d love to see are:
Procedural geometry grouping and data transfer.
The ability to reference parameters and attributes in node values directly.
Low-level geometry and attribute manipulation through code, with the ability to iterate automatically over points and prims, with point-cloud, minpos, neighbors, lerp, etc. functions.
The ability to visualize, template, or bypass each individual node, as well as to inspect the data flow.