GSoC '20 Draft Proposal: Regression Testing


Himanshi Kalra




Regression testing helps in quickly detecting whether a new feature or patch breaks existing functionality. With automated tests at their disposal, developers can test a patch while they are still working on it, as opposed to testing only during the last phase.


Right now, the only way to add a test object is through Blender itself. This is fine when there are few tests, or when we want to customize the mesh at every step. While this can work, adding a framework that can automatically create blend objects and give them fitting names will make testing more beginner friendly.


Deliverables:

  • Writing a framework for automatic blend file generation.
  • Improving the framework for testing mesh modifiers.
  • Adding a framework for bone constraint testing / automated compositor testing.
  • Developer documentation.
  • In case there is some time left, testing bone constraints as mentioned in 5 under Regression Testing.

Project Details:

Automatic blend file Generator

The idea is to break the testing into 3 parts:

  • Generation of bare essential blend objects
  • Tweaking (changing the default values) as per user requirement inside Blender
  • Adding the test in the test file.
    We already do the last 2 steps. It may seem contradictory that more time will be expended, but in a case where there are a lot of tests, I think it will make testing more efficient.

The above is one of the ways we can achieve this, but the methodology would be more or less the same. The process would also involve compiling a list of meshes from the already existing set of tests.

  • Automatically generating a blend file
    • automatic generation of the blend object
    • naming of test and expected objects
    • adding them to a Collection
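The bookkeeping part of these steps could look roughly like the following pure-Python sketch. The `test`/`exp` prefixes, function names, and the single "TestCollection" are my own assumptions for illustration, not an existing API; the actual object creation would happen through bpy inside Blender.

```python
# Pure-Python sketch of the naming/collection bookkeeping of the generator.
# In practice the objects would be created with bpy (e.g. the primitive-add
# operators) inside Blender; the "test"/"exp" prefixes and "TestCollection"
# are assumptions, not the existing framework's conventions.

def make_object_names(mesh_type, modifier_name):
    """Return the (test, expected) object names the generator would assign."""
    return "test" + mesh_type + modifier_name, "exp" + mesh_type + modifier_name

def build_collection(specs):
    """Group the generated name pairs under one collection."""
    collection = {"TestCollection": []}
    for mesh_type, modifier_name in specs:
        collection["TestCollection"].append(
            make_object_names(mesh_type, modifier_name))
    return collection
```

A generator along these lines would take a list of (mesh, modifier) pairs and emit a blend file containing consistently named test and expected objects, already grouped in a collection.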

Improving Mesh Modifiers Framework

  • Extending framework for
    • Physics modifiers
    • Curve Modifiers
  • Revamping the code to give a TestName to each test
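A minimal sketch of what giving each test its own TestName might look like: the explicit name becomes the single source of truth, and the test/expected object names are derived from it. The class, fields, and prefixes here are hypothetical, not the current framework's API.

```python
# Hypothetical sketch of the "TestName per test" idea: the test name drives
# the object naming, so test and expected objects can never drift apart.

class ModifierTest:
    def __init__(self, test_name, modifier_type, params):
        self.test_name = test_name          # e.g. "CubeCurve"
        self.modifier_type = modifier_type  # e.g. "CURVE"
        self.params = params                # modifier settings to apply

    @property
    def test_object(self):
        return "test" + self.test_name

    @property
    def expected_object(self):
        return "exp" + self.test_name
```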

Compositor Automated Testing

Some of the work for Compositor tests has been started by Habib Gahbiche (D6334), and I would like to build upon the suggestions by Sergey.

Object / Bone Constraints

This framework will mostly mimic the framework used for testing of Mesh modifiers. I would initially build the framework keeping “Transform” constraints in mind and would extend accordingly for “Tracking” and “Relationship”.
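One difference from the mesh modifier framework is the comparison step: where mesh tests compare vertices, a constraint test could compare the bone's final 4x4 matrix against an expected matrix within a tolerance. The sketch below is a pure-Python stand-in for that comparison; in Blender the matrices would come from the evaluated pose bones.

```python
# Sketch (assumption, not existing test code) of the comparison a
# bone-constraint test would need: element-wise matching of two 4x4
# matrices, represented here as nested lists, within a tolerance.

def matrices_match(actual, expected, tol=1e-6):
    """Return True if every element of the two matrices agrees within tol."""
    return all(
        abs(a - e) <= tol
        for row_a, row_e in zip(actual, expected)
        for a, e in zip(row_a, row_e)
    )
```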

Previous Contributions

I have submitted two patches: one for Deform modifiers (D6620) and the other for Simulate modifiers (D7017). D7017 consists of tests for Cloth and Soft Body; I will add tests for the remaining Physics modifiers.

Project Schedule:

As per the academic calendar, the semester was supposed to end on 8th May, but due to the Coronavirus outbreak there is a degree of uncertainty, as all colleges and universities are suspended. There won’t be much interference during the GSoC program.
If I finish Deliverables 1 and 2 together, I would like to work on the framework for Bone Testing along with Compositor Testing.

Week 1: Discussing the structure of the framework for automatic naming with the mentor and fellow developers.

Week 2: Starting work on the framework for automatic naming. Deliverable 1

Week 3: Finish working and testing the framework.

Week 4: Writing tests for Curve Modifiers and modifying the framework.

Week 5: Extending framework for Physics Modifiers. Deliverable 2

Week 6: Finish testing Physics modifiers and Curve Modifiers.

Week 7: Discussion on compositor automated testing.

Week 8: Starting work on compositor automated testing. Deliverable 3

Week 9: Writing tests for compositor.

Week 10-11: Buffer for above three deliverables/Bone Constraint Testing.

Week 12: Developer documentation. Deliverable 4


Hey! I have been using Blender for more than 2 years now; my journey began by learning from Andrew Price’s Beginner Tutorial Series, making a donut :slight_smile: I tried my luck at animation, but I am better at modeling.

Although I started with Blender because it was open-source (read as free), I eventually fell in love with it and wanted to contribute towards making it better. I familiarized myself with the codebase of Blender and submitted my first patch, D5610. I fixed a few minor bugs (D5744, D5867), submitted a patch for testing deform modifiers (D6620), and am currently working on the above-mentioned D7017 (Simulate modifiers test).

I studied C, Data Structures, and Algorithm Analysis as part of my first-year curriculum. I have since been coding in C++ for competitive programming (HackerRank, Codeforces, LeetCode) and have learned Python for open source and ML (Machine Learning).


* I would like to discuss what is to be tested in bone constraints.


I like the overall idea, but I’m wondering whether your focus will be on covering all test cases (i.e. all bone constraints and all modifiers) or on writing frameworks that are as generic as possible and then generate test cases automatically?

I would like to focus on writing frameworks and make them as generic as possible, but I think that would complicate things in cases like the Physics modifiers, where each modifier has its own different way of being used.


It makes sense to make it as automatic as possible, ideally adding a test for a node or modifier can be done with just a single line in a Python script. But certainly there’s quite a few that need manual setup.

For the proposal, two points of feedback:

  • Deliverables mention a framework for testing mesh modifiers, but this already exists? Or is this about mesh operators, or improving the existing framework?
  • Project details section could use more detail about how you might tackle the different types of tests.

Thanks for the feedback. The plan is to extend as well as improve the mesh modifier framework to make it exhaustive, i.e. to include Physics modifiers. But we can’t yet test the modifiers that require direct human interaction (Surface Deform, Warp, Mesh Deform) or complex meshes (Hook, Smooth Corrective).
I will update the project details soon.


A few quick comments:

  • Automatic generation of object names may get out of hand quickly. For the tests I added recently I wanted to test more end-to-end scenarios, so I chained multiple modifiers together (long names) and modified various parameters for each modifier like a normal user would.
    • Would names like Cube_SubDiv_Level2_CatmullClark_with_Creases and Cube_SubDiv_Level2_Simple have to be created automatically?
    • For regression tests I named the objects with T##### in their name for easy reference, and I also tried to use the objects from any example .blend in the bug. I think it’s critical that the framework going forward allows easy insertion of regression objects from real bugs into the mix (naming and otherwise).
  • How exactly are you going to handle the creation of object data like vertex groups and weights?
  • Curves with modifiers, based on some general searching on the tracker, are also ripe for testing. They seem like an easy win to include in this effort.
  • The hook modifier is actually testable without user interaction - you can get a deformation out of it by simply moving the empty that is set as the hook. I was going to enable a simple case once I get some time.
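The point about the Hook modifier can be illustrated with a toy model: moving the hook's Empty displaces the hooked vertices, so the expected mesh is just the original with the offset applied to the hooked vertex set. This deliberately ignores falloff and vertex weights, so it is not Blender's actual hook math, only a sketch of why no interactive input is needed.

```python
# Toy model (not Blender's real hook computation) of a headless Hook test:
# the expected result of moving the hook's Empty by empty_offset is the
# original vertex list with that offset applied to the hooked vertices.

def apply_hook(vertices, hooked_indices, empty_offset):
    """Offset the vertices at hooked_indices; leave the rest untouched."""
    ox, oy, oz = empty_offset
    return [
        (x + ox, y + oy, z + oz) if i in hooked_indices else (x, y, z)
        for i, (x, y, z) in enumerate(vertices)
    ]
```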

I think each test should have a name, and the object name would be based on that.

Ideally I think most tests should be fully defined in a line of Python, getting one or more meshes from an available set of meshes and specifying the modifier and parameters to apply to that.
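That "one line per test" idea could be sketched as a table of (name, base mesh, modifier, parameters) entries with one shared runner applying them all. Every identifier below is illustrative, not the framework's real API; `apply_modifier` stands in for the bpy calls that would create the modifier and apply it.

```python
# Sketch of table-driven modifier tests: each test is a single-line entry,
# and a shared runner applies them. All names here are assumptions.

TESTS = [
    ("SubdivCatmull", "Cube", "SUBSURF", {"levels": 2}),
    ("ArrayOnPlane", "Plane", "ARRAY", {"count": 3}),
]

def run_all(tests, apply_modifier):
    """apply_modifier(mesh, modifier, params) would wrap the real bpy calls."""
    return {
        name: apply_modifier(mesh, modifier, params)
        for name, mesh, modifier, params in tests
    }
```

Adding a new test then really is one line appended to `TESTS`, which keeps the suite easy to read and maintain.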

When a bug needs to be tested, I think the first thing to try should not be to use the specific object from the bug report, but rather to think how the tests could have been written so that this bug would have been caught.

For example if there is a specific type of topology that is problematic, it’s best to add that topology to a set of meshes that is used by most modifiers. Or if there is some issue with e.g. vertex colors on meshes, there should be a mesh with vertex colors that is tested with all modifiers.

There are cases where you need one specific mesh for a bug in one modifier. But in order to both catch more bugs and keep the test easier to maintain, I would try not to when possible.


Yeah, I feel that keeping a TestName, and basing each object under test on that name, is good. I was just curious about any form of automatic naming being proposed. Similarly, “helper” objects like empties used for offsets and rotations should obviously be named consistently too.

Yes, in some cases I reduced the bug’s .blend down a bit further to simplify the test mesh. In a few cases, I increased the complexity of the bug’s test mesh to cover even more cases, etc. But yes, a sufficient set of general, but interesting, meshes is definitely going to be required. Much more than a cube centered at (0, 0, 0) in world space.

The proposal should probably list out a good initial set of such test meshes and their attributes, though. An initial list can be tediously gathered from the tests so far :).

The important part for me will be to not lose the ability to 1) test interesting chains of modifiers in realistic ways and 2) realistically cover some of the more common combinations of parameters in each modifier.


Thanks for the feedback.

Yes, it might. Initially I was thinking of adding this feature for primitive meshes only, with default values of modifiers applied. But as you suggested, I can compile a list of interesting meshes, and one could perform a test on these and/or the primitive ones. The idea is to break the testing into 3 parts:

  • Generation of bare essential blend objects
  • Tweaking (changing the default values) as per user requirement inside Blender
  • Adding the test in the test file.
    We already do the last 2 steps. It may seem contradictory that more time will be expended, but in a case where there are a lot of tests, I think it will make testing more efficient.

And as far as naming is concerned, I would like to propose keeping names simple, like test_objectName_modifierName, but we can make them as detailed as one wants if all attributes are set in the code itself.

I don’t understand this completely yet; I guess I will change my heading to semi-automation or partial automation. It’s like a head start. The difficulty I am seeing for now is: how will I space out objects when there are a lot of them with different sizes?

I will definitely add them. Thanks :).
As for the Hook modifier, I can’t remember now how I was approaching testing it; I will look into it soon.

Have a look at this work. It’s a regression environment that tests add-ons using pytest. It can run against multiple versions of Blender, including the nightlies, and it currently works on GitHub Actions and Travis CI.


Hello, I have worked with @mavek on blender-addon-tester and am using it, in a super alpha phase, for a G’MIC add-on for Blender for cross-OS, cross-Blender-version testing. It is very practical to use (pip install blender-addon-tester; from blender_addon_tester import test_addon). It could very easily work for Python’s builtin unittest suite as well, in a few lines. I do not want to hijack the focus of this GSoC, as the target of blender-addon-tester is add-ons (and, to a smaller measure, Blender versions on any OS). I will definitely follow how the D6334 compositor testing patch mentioned above will be usable with blender-addon-tester (G’MIC is an image processing library).
So… giving early kudos for this Blender file generation framework and the rest of Blender core testing!! All my encouragements!!


Just a note on unittest versus pytest: pytest is pretty much the standard now, but it is usually only usable if you can do a pip install. People who don’t have access to pip install usually have to use unittest, because they have no choice.

In our flow, that @myselfhimself and I have put together, the pip install component of blender/python has been solved and pytest is in there now.

But it should be possible to drop back to unittest if required; the “testing the tester” work would need to be redone, though. Rippling an error out from the guts of the Python in Blender all the way up the chain to a CI tool is a bit Inception-like!
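The fallback itself can be small: pytest-style plain-assert checks can be wrapped in a unittest.TestCase so the same check runs where pip (and thus pytest) is unavailable. A minimal sketch, with a placeholder check standing in for a real mesh comparison:

```python
# Sketch of a unittest fallback for a pytest-style check. The plain-assert
# helper runs under pytest as-is; the TestCase wrapper makes it runnable
# with stdlib unittest alone. The check itself is a placeholder.
import unittest

def check_mesh_unchanged(before, after):
    assert before == after, "modifier unexpectedly changed the mesh"

class FallbackTest(unittest.TestCase):
    def test_mesh_unchanged(self):
        check_mesh_unchanged([(0, 0, 0)], [(0, 0, 0)])
```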

Thanks for sharing.

A lot of what blender-addon-tester offers is already available for Blender. For instance the current automated tests are part of the continuous integration tools of Blender.
Testing against multiple versions could be interesting, e.g. once Blender supports multiple LTS releases at the same time, but even then versioning the tests with git/svn could be enough.

I still want to have a closer look at blender-addon-tester but its focus does indeed seem different than the current tests in Blender.

At the risk of clobbering this post, I have started another post:

But yes, the testing flow for an addon, and being regressable, particularly the ability to run against multiple versions of blender, makes it a different beast than the testing flow already contained in the blender project.

Maybe the name of this post is a little generic and I am misdirecting the idea of the work.

Please have a look at those, and if you have any observations I would love to hear them (at the link above :wink: )