Hi,
as part of my GSoC 2025 application I have some questions about the Regression Testing project briefs, mainly regarding the current testing setup and the roadmap for improving it.
I am trying to weigh the use of .blend files against possible alternatives:
what is regarded as an improvement versus unnecessary files/complexity, etc.?
And perhaps: are there differing opinions (pros/cons) on this?
-
.blend files are generally preferred for test cases, correct?
Presumably because it is easier to add, edit, and delete test cases that way? -
Is there a limit on the number of .blend files? How many are enough, and when are there too many?
Is there a defined or unwritten limit, e.g. a point where a good balance is reached between files added and cases covered?
Could it be limited by the time it takes to run the tests?
I was surprised by how quickly a single headless Blender session opens and executes a series of .blend files (unless I got the wrong impression of how the setup currently works; a rough sketch of my understanding follows below).
Are you aware of any limit that acts as a ceiling to stay under?
E.g. no more than 30 seconds to run test module xyz, or no more than 100 files? -
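For reference, this is roughly how I picture that headless flow; the directory name, the check function, and the idea of walking all files inside one session are my own assumptions for illustration, not a description of the actual test harness:

```python
# runner.py, launched once as: blender --background --python runner.py
# Everything below is a simplifying assumption about the flow, not the real harness.
import bpy
from pathlib import Path

TEST_FILES = Path("tests/files")  # hypothetical directory of .blend test cases

def check_scene() -> bool:
    """Placeholder check; a real test would compare against expected data."""
    return len(bpy.context.scene.objects) > 0

failures = []
for blend in sorted(TEST_FILES.glob("*.blend")):
    # Opening a file replaces the current session data, so one headless
    # instance can walk through many .blend cases back to back.
    bpy.ops.wm.open_mainfile(filepath=str(blend))
    if not check_scene():
        failures.append(blend.name)

print("failures:", failures or "none")
```
-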
Is there motivation to move more towards a scripted setup?
Either by generating those tests, or by having them exist as Python code, for example? (This links back to the first question in a way.)
I understand that a .blend case is preferred over a .py case since it is easier to add, edit, and remove.
Would that still hold even though .blend files are harder to track and review, and may run a little slower (file opening)?
Is there an unwritten ".blend file when possible, scripted when necessary" rule?
By "scripted" I mean either a script that generates (temporary?) .blend files, or one that holds test cases in code form that are harder to replicate in .blend files (node creation, for example; see the sketch below).
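To illustrate that second variant, a minimal sketch of a node-creation case expressed purely in code; the material name, node choice, and assertions are made up for illustration and not taken from the existing test suite:

```python
# Hypothetical code-only test case: build a node setup with bpy instead of
# storing it in a .blend file. Names and the checks are illustrative only.
import bpy

def test_principled_node_setup():
    mat = bpy.data.materials.new(name="test_material")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    nodes.clear()
    output = nodes.new("ShaderNodeOutputMaterial")
    shader = nodes.new("ShaderNodeBsdfPrincipled")
    links.new(shader.outputs["BSDF"], output.inputs["Surface"])

    # The "expected state" lives in code, so it is easy to diff in review.
    assert len(nodes) == 2
    assert output.inputs["Surface"].is_linked

test_principled_node_setup()
```

Whether something like this is actually better than a checked-in .blend presumably depends on exactly the review/size/runtime trade-offs asked about above.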