Working of the Cloth Modifier

When the Cloth modifier is applied to two identical meshes with identical settings (the same settings in the UI), it gives different results: the resulting meshes’ coordinates may differ in the last decimal place, or even in the first. I would like some help; I am trying to add unit tests for the simulation modifiers.

Ideally that shouldn’t happen. Is the effect on the overall simulation so large that comparing them becomes impossible, or is there some reasonable threshold value that you could use to compare vertex positions?

The first suspect would be multithreading. You can try running with a single thread (blender -t 1, and set the environment variable OMP_NUM_THREADS=1) and see if it still happens then. The test could be run without multithreading, though it’s better if we can avoid having to do that. We do want simulations to give exactly the same results on different runs, so this is something to be looked at.
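A minimal sketch of driving such a single-threaded run from a script, assuming blender is on the PATH and that cloth_test.py is a hypothetical test script:

```python
import os
import subprocess

# Force single-threaded execution: -t 1 limits Blender's own threads, and
# OMP_NUM_THREADS=1 limits OpenMP (used by parts of the physics code).
env = dict(os.environ, OMP_NUM_THREADS="1")
subprocess.run(
    ["blender", "--background", "-t", "1", "--python", "cloth_test.py"],
    env=env,
    check=True,
)
```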

If that doesn’t work, then finding the cause is probably a matter of tediously narrowing things down: disabling features (like collision) or disabling code until you can pin down exactly where the results start to differ.

@brecht The results were the same with single threading, so in my opinion that is not the problem. The threshold value (in this case it is threshold_square) I reached is 0.05, by trial and error (it was failing at 0.01). The current thresh value is a predefined macro (const), FLT_EPSILON, so for testing should I send the threshold as a parameter? Or is 0.05 not acceptable, i.e. too large compared to e-10 (again as thresh_sq)?
Do you have any other ideas about where the difference is coming from?
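For reference, this is roughly the kind of comparison involved; a minimal sketch assuming Blender’s Python API (vertex coordinates are mathutils vectors), with meshes_match as a hypothetical helper:

```python
def meshes_match(mesh_a, mesh_b, threshold_sq=0.05):
    """Return True if every pair of corresponding vertices lies within
    the given squared-distance threshold."""
    if len(mesh_a.vertices) != len(mesh_b.vertices):
        return False
    for va, vb in zip(mesh_a.vertices, mesh_b.vertices):
        # vertex.co is a mathutils.Vector; compare squared distances to
        # avoid a square root per vertex.
        if (va.co - vb.co).length_squared > threshold_sq:
            return False
    return True
```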

I noticed the problem when trying to test the cloth modifier too. I think the testing framework would benefit from having a threshold as a parameter (see D5857), even when the tolerance has to be much higher than floating point arithmetic precision.

I see two good reasons for that:

  1. if the mesh doesn’t visually change, then the artist won’t notice a difference and the test should pass
  2. if the modifier does have some randomness (e.g. caused by hashing pointers, where the end result can differ depending on the machine Blender is running on), it is better to have a regression test with a high tolerance than no test at all! This way at least trivial problems, such as crashes or generating empty meshes, can be caught early.
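A minimal sketch of what a test using such a parameter could look like, assuming the optional threshold argument proposed in D5857 for Mesh.unit_test_compare (the exact signature may differ):

```python
# Compare the evaluated mesh against the expected one with a relaxed tolerance;
# unit_test_compare returns "Same" when the meshes match within the threshold.
result = evaluated_mesh.unit_test_compare(mesh=expected_mesh, threshold=0.05)
assert result == "Same", f"Meshes differ: {result}"
```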

For future reference: I initially planned on sending a threshold value but couldn’t settle on a specific one, so there is a workaround instead (D7017). Usually the testObject and the expectedObject are located at different places in world coordinates, but the physics calculations are done in world coordinates; so if we duplicate the testObject and don’t change the duplicate’s position (this duplicate becomes the expectedObject), we can test the cloth modifier without a hitch.
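A minimal sketch of that duplication, assuming bpy and a hypothetical object named "testCloth":

```python
import bpy

test_obj = bpy.data.objects["testCloth"]    # hypothetical test object name
expected_obj = test_obj.copy()              # duplicate the object...
expected_obj.data = test_obj.data.copy()    # ...with its own mesh datablock
expected_obj.name = "expectedCloth"
bpy.context.collection.objects.link(expected_obj)
# Crucially, the duplicate keeps the testObject's transform: the physics runs
# in world coordinates, so both simulations start from the same place.
```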


Did you mean the evaluatedObject and the expectedObject are at different locations? If so, we could simply create the evaluatedObject at the expectedObject’s location instead of the testObject’s location. Do you think that will solve the problem?

That’s good to know. But the problem I was having was that a test was passing only about 80% of the time, so most likely there was some randomness going on. That’s where selecting a higher threshold can be useful.

Yes, the evaluatedObject, which is a duplicate of the testObject. You could try testing again with this; the randomness coming in was probably due to floating point.
