That’s absolutely not true, especially when we talk about physics simulation.
Take Mantaflow as an example, or STORM, a specialised simulation package that is faster than Houdini in some solvers while handling scenes just as complex, or Houdini itself.
Not everything can be parallelised, and when it can be parallelised, it’s not always faster.
Those are bold assumptions. For example, I have two main workstations:
1.- i7-5960X - 8 cores - 16 threads
2.- TR-2990WX - 32 cores - 64 threads
Mantaflow was even a bit slower on the TR than on the i7; its parallelisation did not scale well to many-core CPUs. After TBB support was added it improved, but while the improvement is there, the TR is not massively faster than the i7. With rendering it is; with sims, it is not.
The same goes for STORM: the author himself says that more than 10 cores will provide little improvement, and the same happens with Houdini and some of its solvers. It all depends.
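This pattern is what Amdahl's law predicts: if part of a solver is inherently serial, throwing more cores at it hits a hard ceiling. A minimal sketch; the 80% parallel fraction here is a made-up illustration value, not a measurement of Mantaflow, STORM, or Houdini:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup when only part of the workload parallelises."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical solver where 80% of the work parallelises:
for cores in (8, 32, 64):
    print(cores, "cores ->", round(amdahl_speedup(0.8, cores), 2), "x speedup")
```

With these numbers, 8 cores give about 3.3x, but 64 cores still give under 5x: an 8-fold increase in cores buys roughly 40% more speed, which is why a 32-core TR can land surprisingly close to an 8-core i7 on sims.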
The bathroom example is a bold one, but a good one.
Sometimes things are faster single-threaded; sometimes multithreading brings only about a 10% improvement. Even rendering, which is the ideal case for parallelisation, does not scale 100% when you stack CPUs or GPUs on it.
So, no: not everything can be parallelised, and not everything is parallelised, not in Houdini, not in Max, not in Maya, and not in Blender. And no, multithreading is not always the more performant solution; in fact, sometimes it can create a bigger bottleneck and make things even slower.
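That "bigger bottleneck" point can be sketched with a toy cost model: once you account for per-thread synchronisation overhead, total time can actually rise as you add threads. All the numbers here are arbitrary illustration values, not measurements from any real solver:

```python
def parallel_time(work: float, parallel_fraction: float,
                  threads: int, sync_cost: float) -> float:
    """Toy model: serial part + divided parallel part + per-thread sync overhead."""
    serial = work * (1.0 - parallel_fraction)
    parallel = work * parallel_fraction / threads
    overhead = sync_cost * threads  # contention grows with thread count
    return serial + parallel + overhead

# 100 units of work, 90% parallelisable, 0.5 units of sync cost per thread:
for t in (1, 4, 16, 64):
    print(t, "threads ->", round(parallel_time(100.0, 0.9, t, 0.5), 2), "units")
```

In this model, 16 threads beat 4, but 64 threads are slower than 16, because the synchronisation cost outgrows the remaining parallel gain.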
Regarding the cloth sim: if @LucaRood said it cannot be improved with threading, or that it's not worth the effort because it won't bring much performance to the table, I'm with him. He knows what he's talking about.
If a new algorithm is required, it will get a warm welcome when it's done, and then it will be state of the art.