Just another selection group, it selects everything in a vertical range.
I played a little bit with the curves to draw a Gizmo helper so you can see what you are doing. The node needs the geometry to be passed through to add the Gizmo visualization, but it works with just the selection output as well. It would be great if every node (Shader and Comp nodes too) could have viewport gizmos built in. I always find myself fighting with the Texture Mapping node because of the lack of a visual hint for the transformations.
Yes, this is exactly the type of functionality I was missing!
Can I ask how you built the node group? I want to know whether, now that fields have been introduced, we can stop relying on vertex groups, on subdividing the mesh a certain number of times to get enough density, and on things like the Attribute Proximity node. Thanks!
(I needed to create sharp edges on every seam to make it work.)
Currently, Blender does have …
Seam Edges (Edge → Bool)
Sharp Edges (Edge → Bool)
Edge Creases (Edge → Float)
Vertex Group Weights (Vertex → Float)
Vertex Colors (Vertex → Color)
Shape Keys (Vertex → Displacement Vector?)
Face Maps (Face → Bool?)
UV Maps (Vertex → Vector v with v.z=0)
It looks like users can also create their own attributes.
I wonder, is there a plan to unify all of those
(Seam Edges, Sharp Edges, Edge Creases, Vertex Group Weights, Vertex Colors, Shape Keys, Face Maps, Bevel Weights, UV Maps, …)
into one system that manages user attributes?
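For illustration, the kind of unified system I have in mind could be sketched like this. This is purely hypothetical; the class names, enum values, and layout are my own invention (following the domains listed above), not Blender's actual API:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: every built-in layer becomes a named attribute
# defined only by its domain and data type.

class Domain(Enum):
    VERTEX = "vertex"
    EDGE = "edge"
    FACE = "face"
    CORNER = "face_corner"

@dataclass
class Attribute:
    name: str
    domain: Domain
    dtype: type   # bool, float, tuple (vector/color), ...
    data: list    # one value per element of the domain

# The existing layers, re-expressed uniformly (domains as listed above):
layers = [
    Attribute("seam",         Domain.EDGE,   bool,  []),
    Attribute("sharp",        Domain.EDGE,   bool,  []),
    Attribute("crease",       Domain.EDGE,   float, []),
    Attribute("group_weight", Domain.VERTEX, float, []),
    Attribute("color",        Domain.VERTEX, tuple, []),
    Attribute("uv",           Domain.VERTEX, tuple, []),
]

# User-defined attributes would just be more entries in the same table.
by_name = {a.name: a for a in layers}
```

The appeal is that lookup, interpolation between domains, and user-defined data all go through one code path instead of one special case per layer type.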
yeah, I read about an attribute editor in the past,
it has been discussed
I did try to search for a link for you but didn’t find anything
not sure if beginners out there will appreciate replacing vcol and vgroup with a bunch of ‘Floats’ and ‘Booleans’ tho
Here it is:
Sorry, it’s a bit messy because I omitted the Gizmo drawing part, but the selection logic is pretty simple: just check whether the Z position is between an upper and a lower threshold, plus an offset.
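The selection logic described above can be sketched in a few lines of NumPy (my own minimal reconstruction of the idea, not the actual node group; the function name and parameters are made up):

```python
import numpy as np

def select_vertical_range(positions, lower, upper, offset=0.0):
    """Boolean selection field: True where the Z coordinate falls
    inside the [lower + offset, upper + offset] band."""
    z = np.asarray(positions)[:, 2]
    return (z >= lower + offset) & (z <= upper + offset)

points = np.array([[0.0, 0.0, -1.0],
                   [0.0, 0.0,  0.5],
                   [0.0, 0.0,  2.0]])
mask = select_vertical_range(points, lower=0.0, upper=1.0)
# mask.tolist() → [False, True, False]
```

In field terms, this is just Position → Separate XYZ → two Compare nodes combined with a Boolean Math (And); the offset shifts the whole band up or down.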
Yeah, that’s what I hope. I think it would be nice to just abstract vertex groups as point float attributes, vertex colors as face corner color attributes, etc. As @BD3D said, maybe it would complicate things a bit for beginners.
Anyway, that unwrapping is cool! For a moment I thought there was a node to unwrap a mesh. I can’t wait to have UV nodes as well!
Another selection node group: select inside bounds, with support for rotation. The gizmo display this time is just passed as a geometry output to be joined outside the group (check the link to the other post):
I don’t really care about the gizmo implementation being robust right now; it’s just a hack to visualize what you are doing. I hope that in the future gizmo drawing will be a built-in function of nodes.
And a sphere selection group:
Yes!!! Proper visual falloffs, I absolutely push for node gizmos as well!!!
Trying generic mesh selection. I’m not sure I’m making correct assumptions to check whether a vertex is inside another mesh (just using a raycast and checking the sign of the dot product with the hit normal):
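For reference, here is a minimal standalone sketch of that inside-test: cast a ray from the query point and look at the sign of dot(ray direction, face normal) at the nearest hit. For a closed mesh with outward-facing normals, hitting a back face (dot > 0) means the point is inside. All function names are my own; the intersection routine is a standard Möller–Trumbore test, not the node setup itself:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2):
    """Möller–Trumbore ray/triangle intersection.
    Returns the hit distance t, or None if there is no hit."""
    eps = 1e-9
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def point_inside(point, triangles):
    """Inside-test by the sign of dot(ray, normal) at the nearest hit."""
    direction = np.array([0.2, 0.3, 0.9])  # arbitrary ray direction
    best_t, best_normal = None, None
    for v0, v1, v2 in triangles:
        t = ray_triangle(point, direction, v0, v1, v2)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_normal = t, np.cross(v1 - v0, v2 - v0)
    if best_t is None:
        return False                         # ray escaped: outside
    return np.dot(direction, best_normal) > 0.0  # back face: inside

# Test mesh: a cube from -1 to 1, triangulated with outward winding.
V = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
             dtype=float)
F = [(4, 6, 7), (4, 7, 5), (0, 1, 3), (0, 3, 2),   # +x, -x
     (2, 3, 7), (2, 7, 6), (0, 4, 5), (0, 5, 1),   # +y, -y
     (1, 5, 7), (1, 7, 3), (0, 2, 6), (0, 6, 4)]   # +z, -z
tris = [(V[a], V[b], V[c]) for a, b, c in F]

print(point_inside(np.array([0.0, 0.0, 0.0]), tris))   # True
print(point_inside(np.array([0.0, 0.0, 2.0]), tris))   # False
```

Note the usual caveat with this trick: it assumes a watertight mesh with consistent normals, and a ray that grazes an edge or vertex can give a wrong answer, which may be what makes the node-group version feel fragile.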
I believe T89054 is the closest to what you’re saying.
Thank you for the link. That discussion covers my question!
I really like this proposal. It seems like it will make the fields work much better. I really dig the benefit of more versatile node groups.
One thing you may not be considering: When has a new feature not been met with endless complaints from the community? When this noise is presented to the devs with every new feature or change, the only outcome is that the devs must necessarily ignore the noise in order to be able to complete a task. I think with GeoNodes being such a new direction for Blender, any initial design of such a massive undertaking would not be perfect.
I’m, as usual, impressed with the speed at which the devs were able to pivot to an improved approach. Think of how much better this system is than when it was first released as a 2.92 feature only SIX MONTHS ago. It took Autodesk more than a decade to fix some of the issues with its node-based particle system.
Thanks for sharing. I hope this kind of node group gets shipped with Blender; there is no way an average user would know how to create it.
Oh that’s brilliant, you reinvented the old depth testing and stuff. I didn’t think of tackling the problem like this. And the performance seems really good too, on this hi-res Suzanne.
That’s pretty rude. I guess you never used Jacques’ previous work, Animation Nodes. He very obviously knows “what he’s doing.”
I didn’t really hate the attributes workflow. I really liked the fact that you could reuse data really far down the node tree without having to actually make a physical connection between the nodes. It was innovative.
The fact is that you can’t be innovative without failures along the way, but you wouldn’t know anything about that. Stings a little, no? My point is this: stop making the mistakes and unpopular decisions the programmers make into personal attacks on you and your workflow. They’re doing this out of passion and working hard to innovate on workflow and features. Fortunately for you, there is no one looking over your shoulder watching all of your mistakes made along the way to completing a project. Otherwise, you too would be subject to people publicly shaming you for not knowing what you’re doing.
That was my point. Jacques’ Fields proposal (which I very much love) was actually what set the GN development on the right track, but AFAIK the original system, from the user interface side of things was designed by Dalai, not Jacques.
If I am wrong, and it was Jacques, not Dalai, who was responsible for the current GN design that’s in master, then I am sorry and I take it back. But somehow I doubt Jacques would come up with a design like that, considering how great a job he did on Animation Nodes.
And if I am right, then I struggle to understand why he wasn’t the one tasked with designing GN from the front-end side of things.
The original solution in 2.91 to 2.93 was designed behind closed doors, based on the arguably subjective requirements of just the Blender Animation Studio. Then, without any external sanity check, the workflow and UI were designed, released into the master branch, and called production ready.
There was a failure at 4 individual points in time:
- The design was based on a sample test use case from a Blender Animation Studio project. The example use case was not open to any external feedback that could have avoided potential insufficiencies of the design.
- Based on that, the front end was designed to cover this limited use case, again closed off from any significant external feedback that could have impacted the design.
- This design was then merged into master and included in an official Blender build, not even as an experimental feature, so some people already rely on it (despite the possibility of breaking changes in 3.0).
- After the release, it was claimed to be production ready.
This isn’t a matter of a mistake, but rather of a questionable process. The development could have gone more smoothly if the problem had been caught at any of these 3 stages:
- While designing the system, users would probably have spotted early on that the named-string-attribute-exclusive workflow would be slow and limited for a large number of use cases.
- If users had had a chance to play with the working feature in a beta phase, with enough time before it landed in master, they’d have been able to spot the same.
- If it was in master but had been explicitly marked as experimental instead of production ready, users could have been wary of relying on it much in production, and therefore backwards compatibility would not have to be such a strong consideration for 3.0.
Even then, going so fast from nothing to a supposedly production-ready feature included in master would not be such a big deal if this feature were not as major as a whole new node-based content creation paradigm, one that is supposed to prepare Blender for the next decade or two.
So I stand behind not being able to understand how anyone could think that rushing the design of something as big and significant as this into master in production-ready form was a good idea.
Just to be clear. We designed this together as a team. It was not just Dalai and not just me and also not just the two of us.
I might comment on the rest of the discussion later when I feel like it.
In that case I am even more confused.
The reason I made all the assumptions I did is that I have a hard time imagining that releasing the design that’s currently in master as production ready, then having two different competing proposals (both of which mean breaking changes to an already supposedly production-ready design), then choosing one and making tough decisions about backwards compatibility was something the team had intended all along.
But at the same time, given your history of Animation Nodes and Particle Nodes design, I guess you already had something along the lines of Fields design in your mind back when you were working with the rest of the team on the original first GN iteration proposal.
So this is just puzzling to me.
Note that the combination of attributes nodes + previously planned attribute processor is similar to other applications (for example Houdini geometry nodes + VEX nodes). You might think fields are obviously better, and they are great especially for those familiar with shader nodes. But there are real trade-offs, other applications have made different choices with success. And with the addition of the attribute processor the usability would have improved, in a different way. So let’s not pretend that it’s obvious what the right choice is.
I never did, but I admit I made a mistake when formulating the post that started all this. Reading it back, it comes off as criticizing just the design alone. I should have been clearer that I was criticizing the combination of a rapid, experimental design process with putting its result into production-ready official Blender releases so soon.
I personally would have no problem with the text-attribute-based GN design evolving if it hadn’t been presented as a final, production-ready feature so fast. If we both agree that it’s not obvious what the right choice is, then wouldn’t it make more sense to keep the feature in an experimental state at least a bit longer, until the design team figures out what the “better” choice is?
The idea is that it’s not just the design team that figures out the right choice, but also the feedback from users using this in production. I can think of various examples where 3D apps released a complex feature like this, some early adopters use it in production immediately since it solves a real need, others wait for it to stabilize and become production ready for them.
How precisely this should be communicated can be debated, I personally would reserve the term “production ready” for when a feature has had a few releases to stabilize. But in the end I don’t think that makes a big difference or affects the development planning.