Blender's Architecture (split from merge modifier discussion)

We can do it; it’s just that being fast while maintaining correct state is not a trivial task.

It’s not simply about crashing either - take a duplicate edge or a two-sided face, for example. If you give this as input to other modifiers, they need to support it and give correct output. It’s possible, but it means we would need to add error checking in many places where we currently don’t need it.
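To make those degenerate cases concrete, here is a small illustrative sketch (plain Python, not Blender code; the function and data layout are hypothetical) of what a downstream consumer would have to detect:

```python
from collections import Counter

def find_degenerate(edges, faces):
    """Find duplicate edges and faces that visit a vertex twice.

    `edges` is a list of (v0, v1) index pairs, `faces` a list of
    vertex-index tuples. Names are illustrative, not Blender API.
    """
    # An edge and its reverse are the same edge, so normalize the order.
    counts = Counter(tuple(sorted(e)) for e in edges)
    duplicate_edges = [e for e, n in counts.items() if n > 1]
    # Welding two corners of a quad can collapse it into a "two sided"
    # face that uses the same vertex more than once.
    two_sided = [f for f in faces if len(set(f)) < len(f)]
    return duplicate_edges, two_sided
```

Every modifier that accepts such data as input would need checks of this kind, which is exactly the extra error checking mentioned above.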


@ideasman42 that is my concern as well. Blender’s architecture is mostly monolithic and opaque.
Adding a simple feature like weld can expose latent vulnerabilities, which you then have to deal with later.

In essence, pretty much everything in Blender behaves like a ‘modifier’, starting with object-mode transformations and ending with advanced animation features. You can do and undo most of this stuff. But the biggest problem here is the fact that all these features don’t share a common underlying API; there is no decoupling and modularization, and because of that, they are not easy to maintain and extend. Introducing a new element into the picture that does not play nice with the other kids can crash the application.
So, what plans do you have regarding this matter?
Do you intend to transition from this monolithic BAU-BAU to something more like a micro-architecture, or will things stay the same? Because if they do, concepts like ‘everything nodes’ will remain just a nice marketing strategy Ton can show around, and that’s pretty much it.

  • A weld operation is only “simple” if you ignore practical constraints.
  • Modifiers are modular in the sense they don’t depend on global state (can run in threads and can be chained together) allowing them to be used in a node system.
  • I’m not sure what a micro architecture would mean in the context of an application that runs in a single process.
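As a rough sketch of the second point above: a modifier can be thought of as a pure function on mesh data, so a stack is just function composition. The `Mesh` type and modifiers here are hypothetical stand-ins, not Blender’s actual types:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mesh:
    verts: tuple  # ((x, y, z), ...)

def translate(mesh, offset):
    # No global state is touched: mesh in, new mesh out. This is what
    # makes modifiers safe to run in threads and to chain.
    return Mesh(tuple((x + offset[0], y + offset[1], z + offset[2])
                      for x, y, z in mesh.verts))

def apply_stack(mesh, modifiers):
    # Chaining modifiers is plain composition, the same shape a
    # node system would evaluate.
    for mod in modifiers:
        mesh = mod(mesh)
    return mesh

out = apply_stack(Mesh(verts=((0.0, 0.0, 0.0),)),
                  [lambda m: translate(m, (1, 0, 0)),
                   lambda m: translate(m, (0, 2, 0))])
# out.verts == ((1.0, 2.0, 0.0),)
```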

Further, your statements aren’t very concrete, if you like to help out it’s best you check on active projects and make suggestions for their design.

Or if you have more detailed suggestions, it’d be better to start a separate thread.

  • these ‘practical constraints’ are not supposed to have a strong binding against the data you modify with weld
  • every ‘operation’ you perform on data should be like the modifier you describe here
  • you can have loose coupling in-process, not only inter-process. Somebody already wrote a proof-of-concept ZeroMQ integration with Blender

Sure, this is becoming off topic. I might be able to help with some ideas and then share them in a separate thread.


Hi @rpopovici, I don’t get how this should solve the problem.

The problem here is that a dataset being processed might get into a state where the result is somehow invalid, even if the operation steps themselves were valid: degenerate faces, sliver faces, non-manifold geometry, duplicated edges or faces. Data structures as well as algorithms can have restrictions regarding their input, and these restrictions apply in multiple places. The modifier stack is a chain of processing steps on a dataset, so ideasman’s concerns are the final performance and data integrity.
Only when an algorithm is robust enough that it produces valid geometry all the time is there no need to post- or pre-check the geometry for validity.
And even if the weld modifier were allowed to produce invalid geometry, and such a geometry check existed and failed, how should the stack behave after such an invalid result? Is it OK to delete invalid geometry in parts or not? Should the stack be cancelled? How can it be fixed? How does this relate to other vertex attributes?
I don’t think that being cautious regarding this aspect has anything to do with Blender being badly designed.


@Debuk I know exactly what is happening with the data. It’s just data. If you can make something with it, good! If not, you can throw an error. The validity of the data is a subjective matter.
You can have only vertices, with or without faces; you can have non-manifold geometry and doubles and still be considered valid by Blender.
I wrote myself a small utility plugin which deals with overlappings in blender: GitHub - rpopovici/mesh-utils: Blender addons
The real problem here is the fact that Blender is running a lot of stuff in the background, like ‘silent modifiers’ for example.
Things like these silent modifiers can crash Blender if your data is not in an acceptable state.
Some silent modifiers in blender:

  • proportional editing
  • auto merge
  • live unwrap
  • key framing/animation
  • many others

All these features act like a modifier, but they cannot be accessed like a modifier; they don’t have a modifier interface. You, as a user, cannot decide how this type of modifier is executed, turn it on/off, move it up or down the stack…
As you can see, there is already a weld modifier for edit mode: the auto merge feature in edit mode acts more or less like a weld modifier 🙂
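For illustration, merge-by-distance (a weld) boils down to something like the following sketch. This is purely illustrative and not Blender’s implementation, which also has to remap edges, loops, and attribute layers, and uses spatial acceleration instead of this O(n²) comparison:

```python
def weld(verts, faces, threshold=1e-6):
    """Merge vertices closer than `threshold`, then remap faces."""
    remap, kept = {}, []
    for i, v in enumerate(verts):
        for j, k in enumerate(kept):
            if all(abs(a - b) <= threshold for a, b in zip(v, k)):
                remap[i] = j  # close enough: reuse the kept vertex
                break
        else:
            remap[i] = len(kept)
            kept.append(v)
    # Remapping can collapse faces; a robust weld must then drop faces
    # that degenerate to fewer than 3 unique vertices.
    new_faces = []
    for f in faces:
        g = tuple(remap[i] for i in f)
        if len(set(g)) >= 3:
            new_faces.append(g)
    return kept, new_faces
```

Note how the degenerate-face cleanup at the end is exactly the kind of post-condition the rest of this thread is arguing about.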

I would not use the phrase ‘bad design’, but it’s definitely not modern enough to support advanced stuff like everything nodes.

Hi @rpopovici, ok, somehow we have quite divided opinions here. I don’t think it’s a good decision to judge the integrity and quality of an algorithm’s result the way you did.
If I think about having such a modifier that, as you said, will never crash Blender, fine. But suppose I move two geometries relative to each other, e.g. in an animation, and in half of the animation the final mesh is there and everything is fine, while in the other half it produced a handled internal error, because, as you said, the algorithm couldn’t make anything of the input, threw an error, and the geometry was dropped in that frame. Sure, I could say that’s a subjective matter, but the more convincing solutions are those that are robust as hell and produce “subjectively good” geometry in most cases. And even if the resulting problem is not as dramatic as my example, it might still not be acceptable. I think that was what ideasman was talking about.

Yeah, you are right, a generous data structure will make decisions and implementations easier. But the result might still be crap; it’s judged by the user whether that result is visually acceptable, even if it’s within the allowed structural boundaries of the mesh representation. Objects that have been 2-manifold before the operation are most likely expected to be 2-manifold afterwards too. And the algorithm of the modifier appearing next in the stack may have more severe constraints.
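As a quick sketch of the 2-manifold expectation mentioned above: in a closed manifold mesh, every edge is shared by exactly two faces. An illustrative check (not Blender code) is short:

```python
from collections import Counter

def is_closed_2_manifold(faces):
    """True if every edge of the face list is used by exactly two faces."""
    edge_use = Counter()
    for f in faces:
        # Walk the face boundary, normalizing edge direction.
        for a, b in zip(f, f[1:] + f[:1]):
            edge_use[(min(a, b), max(a, b))] += 1
    return all(n == 2 for n in edge_use.values())

# A tetrahedron is closed and manifold; a lone triangle is not.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (0, 2, 3)]
```

A weld that merges two separate shells can easily break this property even while producing structurally “valid” data, which is why the user-facing expectation matters.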
But I still can’t see how loose coupling will help here.
Even if you are referring to these silent modifiers, I still don’t get how that shall make it more robust, but I am none of the devs and not the right one to comment on that.

As for the current structure of the modifiers: further up in this thread there were some self-critical comments by one of the devs on the current modifier system and how it fits, or doesn’t fit, into the upcoming nodal system. So yeah, I expect you will see some updates regarding this, and if you have good ideas for that, opening a new thread might be very welcome. For the current thread it’s a bit off topic I think, so I’ll stop here too.

Anything which silently mutates state is BAD by design.


I’d really like to agree with you. You are touching on aspects where things could be improved, so put it in an extra thread and invest some time in explaining what could be done, and point out where you see the bad parts, as precisely as you can. Generally, keep the focus small, or discussions tend to get out of hand.

For me it’s like we are carrying on two different discussions. When I read your lines I stumble across an attitude as if the quality of an algorithm’s result were unimportant, and as if you were reframing some of ideasman’s explanations to make it sound as if the only goal for a modifier is not to crash, as if other requirements don’t exist, and because of that the weld modifier is easy to get right. But many aspects aren’t even touched by adding the loose coupling you are proposing.

I can’t see how a bug report on a crash relates to my argumentation, but it somehow is acceptable in the broader sense of what seems to be the thing you wanna discuss.
To react to your proposal: sure, the concepts of modularization and loose coupling don’t exist without a reason, but performance tweaking can be a beast, and existing code parts should perhaps be seen in that context. Performance gains are often on the opposite side of things like readability, modularization, and other aspects of clean, structured code.
Blender has existed for a long time. Reasons for a decision might have been different in the past: processing power was lower, the need to reduce function calls was higher, maybe the focus of the module was smaller, and yeah, perhaps it just was a bad decision. But rewriting big parts of a codebase is time-consuming and has impacts on other code, as right now there are dependencies. I’m pretty sure quite a few things of that kind exist in every big piece of software.

So generally, be a bit more considerate: a dev will have to keep all this in mind and judge on it when he’s arguing with you. You are ignoring other demands that exist, and you should really put more emphasis on placing your statements in a context where others are able to follow.


@Debuk hey, it’s meant to raise awareness, not to be inconsiderate to the devs. That’s the last thing I want. That bug report just proves my point about what I was saying earlier about these silent features running in the background. There is no reason to have a silent weld modifier exclusively for edit mode and not make it into a real modifier. There is no performance penalty here; there is no modularization issue here. It was a design choice at the time, and frankly it doesn’t matter what it was. This is not going to work with ‘everything nodes’ in mind if you can’t add a simple weld modifier without interfering with other stuff. That is my sole concern.
Btw, I am going to refrain from commenting here. I am preparing a paper with some improvement ideas and we can continue the discussion there.


Isn’t everything nodes for modifiers an opportunity to sort this kind of issue out as well? What makes you believe this is not going to be looked into?
The points you are mentioning are inconsistencies or redundancies in Blender, I agree with you on that. At the same time, I don’t see how they would break everything nodes. Even if they were kept, it would be a missed opportunity, but it would not be that dramatic in my opinion.


@ideasman42 what kind of architecture do you have here, if you can’t add a simple weld modifier without crashing blender? I am wondering…

Actually, Blender is the software I have seen crash the least in years. So, not so bad.

Note: discussion on architecture became off topic, split into new thread.

@rpopovici I’m not sure what you’re getting at:

Blender’s architecture could be different/improved for sure, however in the case of object modifiers I don’t see how this would help.

As far as I can tell, changing design here would just be moving the problem around, not solving anything.

We could, for argument’s sake, allow modifiers to produce degenerate geometry (duplicate edges, polygons which use the same vertex/edge many times … etc.),
then tag the output as having certain degenerate cases and only correct them for consumers of the data which don’t support such cases.

This would have some advantages, allowing fast calculation of data which doesn’t need to be fixed.
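That tagging scheme could be sketched like this (hypothetical names and a toy mesh representation, not actual Blender code): producers flag which degenerate cases their output may contain, and a consumer only pays for the cleanup of cases it cannot handle itself:

```python
from enum import Flag, auto

class Degenerate(Flag):
    NONE = 0
    DUP_EDGES = auto()
    COLLAPSED_FACES = auto()

def consume(mesh, tags, supported, cleaners):
    """Run cleanup only for tagged cases this consumer cannot handle.

    `tags` is what the producer flagged, `supported` is what this
    consumer tolerates, `cleaners` maps each case to a fix-up function.
    """
    for case in (Degenerate.DUP_EDGES, Degenerate.COLLAPSED_FACES):
        if case in tags and case not in supported:
            mesh = cleaners[case](mesh)
            tags &= ~case  # cleaned, so clear the tag
    return mesh, tags
```

A consumer that supports a degenerate case skips the cleaner entirely, which is where the speed-up would come from.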

Firstly, this is only a win if typical usage doesn’t require any correction: if triangulating the faces needs some correction, then whatever gain we get from avoiding the extra calculation is just deferred, not avoided.

Secondly, we don’t avoid problems of data validity entirely. Producers of data need to properly tag all kinds of errors they create; consumers need to ensure input is cleaned properly before it’s used.
We would still likely get crashes whenever a developer doesn’t tag/ensure the data properly - a problem similar to one you recently pointed out:

If we try to avoid this by assuming data is always degenerate, it will add a huge overhead for validation when it’s not necessary.

This also has the downside that a generic data correction/cleaning algorithm is likely to be slower than one which has knowledge of the data that has just been created (as a modifier would).

You can check BKE_mesh_validate_arrays, which needs to do some fairly extensive checks.
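As a toy illustration of the kind of single-pass checks BKE_mesh_validate_arrays performs in C (greatly simplified; the real function checks far more: loops, attribute layers, edge ordering, and so on):

```python
def validate_arrays(num_verts, edges, faces):
    """Walk the arrays once, collecting structural errors."""
    errors = []
    seen_edges = set()
    for i, (v0, v1) in enumerate(edges):
        if v0 == v1:
            errors.append(f"edge {i}: degenerate (v0 == v1)")
        elif not (0 <= v0 < num_verts and 0 <= v1 < num_verts):
            errors.append(f"edge {i}: vertex index out of range")
        key = (min(v0, v1), max(v0, v1))
        if key in seen_edges:
            errors.append(f"edge {i}: duplicate of an earlier edge")
        seen_edges.add(key)
    for i, f in enumerate(faces):
        if len(f) < 3 or len(set(f)) < len(f):
            errors.append(f"face {i}: repeated vertex or too few vertices")
    return errors
```

Even this toy version has to touch every edge and face, which is why running such validation after every modifier would be a significant cost.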

As far as I can see, architecture changes here will only have minor advantages at best.