Files + workflows and the impact of geometry nodes design changes

During some of the recent proposals for the attribute socket changes, a few people were concerned about their existing files, and the time spent on creating training material. I hope to use this topic to clarify the situation.

Note that this is valid for any of the changes planned for geometry nodes, regardless of the solution implemented.

Disruptive Changes

Simulation solvers will be integrated directly with Geometry Nodes. That will make geometry nodes 10x bigger, and something that should last more than 10 years. If all the work done so far (barely 6 months) is to be considered a prototype, so be it. It would still have been worth it as a step toward getting things right.

This was one of the mantras I followed during the nodes workshop. It allowed us to contemplate the most radical ideas without falling into the sunk-cost fallacy.


Geometry Nodes was never announced as experimental, or as a prototype. And the reason is simple: Geometry Nodes is, and has always been, production-ready. It has been used extensively in Sprite Fright, and sparingly in other commercial projects.

Every single one of its use cases was production-ready from day 1 (from its second week, actually). The system was built incrementally, making sure that what it supports, it does well.

And it was tried and proven in production every step of the way. This is the definition of production-ready I am abiding by.

Agile vs Design Upfront

The fact that some aspects of its design may change is the compromise of the “agile” approach the team is using. Some of the solutions we found were only possible because of the exploration we did, validated (or refuted) by the production use cases.

The alternative would have been to make one monolithic design and follow it blindly until the system was fully finished. The Everything Nodes project started a bit like this, and changing that was literally the main thing we did when we pivoted it into Geometry Nodes.

Blender 3, LTS and Backward Compatibility

If possible, all old files should be converted on file load without any loss. The resulting file may not have the prettiest node tree, but it should work the way it used to.
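Conceptually, such a file-load conversion amounts to a versioning pass that remaps deprecated node types to their new equivalents. A minimal sketch in plain Python (all node identifiers and function names here are hypothetical illustrations, not Blender's actual versioning code):

```python
# Hypothetical versioning pass: old node types are remapped to their new
# equivalents on file load, so a file keeps working after a design change.
# The node identifiers below are illustrative, not Blender's real API.

# Mapping from deprecated node identifiers to their replacements.
NODE_REMAP = {
    "GeometryNodeAttributeMathOld": "GeometryNodeMathNew",
    "GeometryNodeAttributeRandomizeOld": "GeometryNodeRandomValueNew",
}

def version_node_tree(nodes):
    """Return a copy of the node list with deprecated types replaced."""
    versioned = []
    for node in nodes:
        # Unknown (non-deprecated) types pass through unchanged.
        new_type = NODE_REMAP.get(node["type"], node["type"])
        versioned.append({**node, "type": new_type})
    return versioned

old_tree = [
    {"type": "GeometryNodeAttributeMathOld", "inputs": ["position"]},
    {"type": "GeometryNodeMeshCube", "inputs": []},
]
new_tree = version_node_tree(old_tree)
```

The resulting tree is semantically equivalent even if its layout is not the prettiest, which matches the compatibility goal stated above.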

If that is not possible, Blender 3 is the moment to make this change, even if Blender 3.0 itself may have missing functionality compared to 2.93. Blender 2.93 LTS will be supported for 2 more years.

This is not a desirable effect, but it is a compromise I’m willing to make.

Training Material

Another concern some people have is in regard to their training material. Any current knowledge is directly applicable after the attribute changes, but the specific ways of achieving the same effects will improve.

All Blender development is done in public; there are basically no secrets. This works as a double-edged sword sometimes: people create tutorials about features that are only available in alpha builds, things that may still change before they are fully mature. This is not unique to Geometry Nodes.

One can wait until a series is complete and then produce their documentation. The LTS releases at the end of each Blender series (2.93 LTS, 3.7 LTS, 4.7 LTS) are perfect candidates for this.

However, if you are covering the day-to-day development of features, that comes with a toll. I can only recommend that fellow book and tutorial writers provide context to their audience about Blender's development process.

Finally, keep in mind that no disruptive change is made lightly or in disregard of its impact.


As long as you guys can make GN easier to get into for an average user, scrapping stuff should be fine.

The problem I’ve encountered a number of times is that anything beyond the simplest scattering results in hard-to-read, hard-to-understand node trees, and while a lot of nodes were added, it definitely isn’t clear how to combine them in a way that produces results.


This big change is fine, as long as the new system is clearly better and is not going to be scrapped again.

Thank you for posting this note! I think that as long as the communication is there (as it is), clearly explaining the reasons, it’s hard not to give full-fledged support, even if some existing tools/addons/tutorials may need to be updated for the improved system. All in for a less linear approach to modelling and scattering in geometry nodes, whatever change that may mean.

Thanks for the information :slight_smile:

Using the term “production-ready” here is a bit misleading…
when using such terms, there’s the assumption that it’s a mature and very polished tech, which is not the case yet :stuck_out_tongue: Geonode is still very young

Perhaps this was a marketing move?
If it is, well, it was indeed very effective; geonode generated a lot of buzz that wouldn’t have happened if it were labeled as “early tech”.

Even if Blender 3.0 itself may have missing functionality compared to 2.93.

Could you please elaborate? or perhaps there’s too many uncertainties to clarify?

It’s a bit confusing to hear that; in the blog you seemed quite confident that all nodes could be converted to the new attribute system with ease. I suppose these missing features would come from the fact that not all nodes will be converted to the new system at once?


There is no marketing move. I already explained the definition I have of production-ready. Feel free to disagree with it. But please refrain from elaborating on this further since this is not the topic of the conversation.

Could you please elaborate (…)

I believe in sharing the context of the decision-making process. So basically I’m sharing what I would consider if push comes to shove. Thus the generous use of “if” in the post.

in the blog you seemed quite confident that all nodes could be converted to the new attribute system with ease (…)

I am still confident that this is the case. Emphasis on could, and on how this speaks to the technical challenge. Whether porting every node will be prioritized (it probably will) is a separate topic that depends on the other targets we have for 3.0 and on everything else that may require the team’s attention.


Thanks for taking the time to answer :slight_smile:

Anyhow, Good luck with the rework :clap: you guys are doing great work!


Great! I have followed Blender since 1998. During that same period I worked with many other applications through their evolution, so I experienced ups and downs: bugs, wrong directions, or years of stagnation until the company making the app got sold or disappeared, leaving us with projects and files that were useless and not transferable to other platforms. I follow Blender devtalk more for information and education than as a resource for working with Blender, and it has renewed my professional experience in 3D production. Devtalk has taught me many inside details, helping me use the application better and be more efficient.

So when Animation Nodes came along, around the same time as Sverchok’s nodes, I was at first surprised and not sure where all of this was going. Animation Nodes certainly made animation much more capable and efficient. Sverchok I am not sure about, as it was aimed more at industrial and architectural usage; tell me if I am wrong here. Building on this evolution, shader nodes became a strategic tool allowing fantastic manipulation of light, colors, materials and visual effects. In comparison to other commercial packages, I quickly realized, as did my collaborators, that we were witnessing a new age in visual production and animation.

Now we are stepping to a new level of creating volumes, spaces and structures in 3D. Geometry Nodes bring the promise of powerful tooling for building very complex structures while keeping control of the whole scene and the objects within it. Power comes with responsibility and knowledge. If you want to design a complex space station, you need tools that allow not only for designing, but also for scaling and ensuring that volumetric proportions, dimensions and positions are controlled and applied. That relies on understanding a new layer of 3D: mathematics.
One needs to understand what a vector, a variable, an exponent, etc. are. Then one has the power to master large-scale complex structures well beyond the cube. I feel that in the years to come, Blender will set a standard for designing space stations, very large Earth-based solar arrays, or new breeds of flying objects. I am convinced that Blender is the future of engineering design, as well as artistic visual design, for the young people starting with it today. Bravo to Ton and you devs for your realism, your hard work and your consistency in progressing with the community and all its dreams. There is a long way to go, but it is a fantastic venture.


Thanks for the input, @dfelinto. It is all fair enough. If I may ask, do you have any date for rolling out the first iteration of GN 2.0, even just from a “finished” design point of view?


Production-ready assumes that a business can make the tech part of its production pipeline.
If part of that pipeline is redesigned, effort must be spent rebuilding it, starting with retraining specialists.
Even if that is not hard for individuals, isn’t it harmful for companies?

Also, if something that keeps the same name but has a redesigned approach (down to the mindset required to use it) has been called production-ready, then what can’t we call production-ready?

Geometry Nodes!
At first, when the project came up, I was not sure where this technology would fit in the global CG production space. Blender already has very strong and powerful tools to create and develop games, scenery, animation, architectural spaces, prototype designs, etc. So GN was conceptually attractive to me, but I was, and still am, curious about how it would, and will, work.
Today we have access to a collection of nodes whose character is more parametric than functional. Let me explain with the following example:
Take a car engine: it is composed of a cylinder block, pistons and a distributor where atmospheric air is mixed with gas. The concept is now deep in our culture and understanding. However, most users and drivers are not going to open the hood and start playing with the components, because they don’t know or understand how to deal with them. Mechanics do the fixing, but only by following technical manuals specific to the model. If they cannot resolve a problem, they have to call the company that makes the car and get help from the engineers who designed the engine and its parts. The engineers used scientific knowledge of materials, physics, chemistry and more, integrated into mathematical models, to design the functioning of an engine, allowing the engineers to design the parts and the mechanics to build a functioning thermal engine.

In the case of geometry nodes, we have developers conceptualizing how certain tasks can be made generic within a context and the scope of a design (create a cube, a tube, a donut…). But as the concept is symbolic, it needs to be translated into a tool usable by the artist. This is where we may have a gap. Artists in CG work in visual space, from visual clues to the final finished object(s). To quickly create objects and set them in a scene, artists follow both artistic direction and their own inspiration. In my experience, conceptualizing 3D space is more an artistic experience than an exercise in symbolic abstraction. Using modifiers, artists can identify which parameter to use, and the scope of those parameters, to reach a specific effect or situation. With GN, at the present stage, it is a different story. First, we have a collection of tools, the nodes, whose naming is not familiar to artists.
Second, in opposition to the modifier process, GN nodes have to be arranged in a strict flow, from left to right as I understand it at this stage, lined up in rows between a start and an end. One question, from an artist’s point of view: what if I did not place a given node at the correct position within the flow? We cannot know. In contrast, we can move modifiers up and down and ‘see’ the results, or at least foresee the impact on the final object. In GN, at this stage, artists are not yet able to control what they are expecting from the setup, and it is unintuitive to set up a GN tree with a clear goal in mind, because of the unclear effect a field or a series of parameters will have on the final step. So at this stage we can perhaps consider that we are at the dramatic turn of a generation, where experienced artists will keep working with the tools they master, while young, future artists will directly grasp the complexity of GN: the profusion of default parameters in the nodes, combined with custom parameters that modify and increase the granularity of a node within a flow. All this will probably lead community developers to create mega GN setups by ‘wrapping’ sets of GN nodes behind a predetermined user interface (UI), comparable to what modifiers are today but more powerful.
Just a thought exercise…
