@rpopovici I’m not sure what you’re getting at:
Blender’s architecture could certainly be different/improved; however, in the case of object modifiers I don’t see how that would help.
As far as I can tell, changing the design here would just be moving the problem around, not solving anything.
We could, for argument’s sake, allow modifiers to produce degenerate geometry (duplicate edges, polygons which use the same vertex/edge multiple times, etc.).
Then tag the output as having certain degenerate cases and only correct them for consumers of the data which don’t support such cases.
This would have some advantages, allowing fast calculation of data which doesn’t need to be fixed.
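A rough sketch of what such tagging could look like (everything here, the flags, TaggedMesh and mesh_ensure_valid_for, is hypothetical and only for illustration, not anything in Blender’s actual code):

```c
/* Flags a producer (e.g. a modifier) would set to describe the kinds of
 * degenerate data it may have created. (Hypothetical names, for illustration.) */
enum {
  MESH_TAG_DUPLICATE_EDGES  = (1 << 0),
  MESH_TAG_DEGENERATE_POLYS = (1 << 1), /* polys reusing the same vert/edge */
};

typedef struct TaggedMesh {
  /* ... the actual mesh arrays would live here ... */
  unsigned int degenerate_flags;
} TaggedMesh;

/* A consumer passes in the degenerate cases it cannot handle; only those
 * are corrected, and already-clean data takes the fast path straight through. */
static void mesh_ensure_valid_for(TaggedMesh *mesh, unsigned int unsupported_flags)
{
  const unsigned int to_fix = mesh->degenerate_flags & unsupported_flags;

  if (to_fix == 0) {
    return; /* nothing this consumer cares about is degenerate */
  }
  if (to_fix & MESH_TAG_DUPLICATE_EDGES) {
    /* ... remove duplicate edges ... */
  }
  if (to_fix & MESH_TAG_DEGENERATE_POLYS) {
    /* ... fix polygons which reuse the same vertex/edge ... */
  }
  mesh->degenerate_flags &= ~to_fix;
}
```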
Firstly, this is only a win if typical usage doesn’t require any correction: if, say, triangulating the faces still needs the data corrected, then the work we saved by skipping correction up front is just deferred, not avoided.
Secondly, we don’t avoid problems of data validity entirely. Producers of data need to properly tag all kinds of errors they create, and consumers need to ensure their input is cleaned properly before it’s used.
We would still likely get crashes whenever a developer doesn’t tag/ensure the data properly - a problem similar to one you recently pointed out:
If we try to avoid this by assuming the data is always degenerate, we add a huge validation overhead even when it isn’t necessary.
This also has the downside that a generic data correction/cleaning algorithm is likely to be slower than one which has knowledge of the data that has just been created (as a modifier would).
You can check BKE_mesh_validate_arrays, which needs to do some fairly extensive checks.
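To give a feel for why a generic pass is expensive: without knowing which producer made the data, a validator has to check everything. The fragment below is only an illustration of the kind of per-edge checks involved (hypothetical types and function, far simpler than what BKE_mesh_validate_arrays actually covers):

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct Edge {
  unsigned int v1, v2;
} Edge;

/* A generic validator must inspect every edge, because it cannot know which
 * producer created the data or what that producer might have broken. */
static bool edges_validate(const Edge *edges, unsigned int totedge, unsigned int totvert)
{
  bool is_valid = true;
  for (unsigned int i = 0; i < totedge; i++) {
    const Edge *e = &edges[i];
    if (e->v1 == e->v2) {
      printf("edge %u uses the same vertex twice\n", i);
      is_valid = false;
    }
    if (e->v1 >= totvert || e->v2 >= totvert) {
      printf("edge %u references an out-of-range vertex\n", i);
      is_valid = false;
    }
    /* detecting duplicate edges needs a hash over (v1, v2) pairs as well;
     * a producer that knows exactly what it just generated could skip
     * this entire pass. */
  }
  return is_valid;
}
```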
As far as I can see, architecture changes here would have only minor advantages at best.