While in general I totally agree with your comment, in this case the situation is quite a bit different.
The tool was reprogrammed with the original patch as a reference; the original patch solved the users' requirements pretty well and fulfilled the tool's description.
That said, it's pretty clear a rewrite of the tool was needed, since it seems some problems were found when the original tool was used with complex meshes, hence the rewritten tool.
There was also an addon doing a similar job, plus the experience of many users with another package that was, to my knowledge, the first to offer this kind of functionality.
So in this case the expectations for the tool were pretty clear even before the rewrite work on the current tool was started.
It's not a problem that a tool has some limitations; the problem comes when the limitations make the tool nearly useless, a small toy compared to what it was supposed to be. Users come and try the tool, which BTW is being implemented in other packages too, with pretty clear examples of what it should be able to do, and then they find that not only does it not do what they expect, it also generates weird geometry and situations they would not expect at all.
IMHO this tool, in its current state, should never have been deployed to any release, at least not as a "release version". Maybe as an experimental version that required some work to eliminate those known limitations, which, on the other hand, were not clearly explained or documented, to my knowledge at least; I may be wrong here.
So if the tool does not even accomplish what its description says it should do, what you get is a lot of users reporting bugs that may in reality be limitations, but in practice are bugs, because the tool does not do what it's expected to do.
An additional point to all this is that the planned solution seems to be a boolean-based tool, which may be perfectly fine as long as it happens in real time, as users expect. That is not impossible, it has already been done, but I doubt it's the best solution.
So if users don't get what they expect, there is no clear description of the limitations, and the tool is so limited that it cannot even fulfill the most basic operations it's expected to do, like the inward face extrude, then the tool should not be in a release.
Of course this is just my opinion, but there is a reason why it took you so long to commit the new boolean system, and I think it's the very same reason why this tool should never have been released.
Finally, it seems there was no user feedback during the development process, or at least it was biased by the old patch, or it was so scarce that effectively no feedback was given.
User feedback is one of the most important things when releasing a tool. The developer is not writing the tool just for the sake of writing some cool code (and I know that's not what is in the developer's mind); the developer is programming a tool to make it useful for users. If no user is properly testing it and giving feedback, the tool will be full of flaws and limitations, and the developer will be unable to see the problems until the tool is released. For that feedback to come, a call for testing is needed, and I think the experimental tools section in the preferences is ideal for this situation.
To finish: the "known limitation" or "feature request" dishonest feeling comes from a lack of documentation. If the real functionality is properly documented, and the real limitations of the tool are properly documented, then expectations are under control and no "dishonest" feelings arise. But the tool was presented as a very useful tool, capable of what a general user would expect of such a tool, and the limitations were not clearly exposed. So users start reporting bugs that are not bugs but limitations; those bugs then get reclassified as limitations, yet users still see them as bugs, because no one told them about the limitations and the description of the tool was completely different from what they were experiencing.
I don't know if I'm explaining myself well. The thing is that it's not a matter of good or bad work by the developer, I think @mano-wii is doing a pretty good job; the problem is one of communication with the users, and of testing/feedback for the feature.
You got tons of feedback on the booleans, there was a kind of "call" to test them. After the first patch of this tool by another developer, I don't think there has been any call to test and give feedback on this tool, so users expect it to do, at the very least, the same as the original tool from the original patch.
Ok, that's it, too much text. Maybe someone is better at condensing it than I am.