Hi all,
Since Blender 2.80 we have split off master and the release branch very early in the process. During this phase, crashes should be fixed in the release branch and merged back to master. However, without any figures to back this up, there was a concern that master was tested better than the release branch, meaning that issues which would have been caught and solved in master did not always end up fixed in the release branch.
During the Blender 2.90 release we saw the quality of the release drop: we needed a 2.90.1 release and might need a 2.90.2. At that time the release branch wasn't tested beyond running the automated test cases and a quick "does Blender start and can we enter edit mode" check. Back-porting patches is development work and things can go wrong, especially as patches can have side effects that the developer performing the back-port cannot tell from the patch itself.
My proposal is to structure the testing after performing the back-ports so that module owners have control over the quality, without it becoming a heavy process:
- Each module owner has a test suite that can be found directly on the module page on developer.blender.org. This test suite contains a number of manual tasks that can be performed within a few minutes (front to back) and that ensure the basic parts of the module are working. Tests that can be automated should be added to the test runners and not be part of the manual test suites.
- Have a breathing period of at least two working days between the last commit and the actual building of the release. This period is also for making sure that patches don't need to be reverted and that the quality of the final product can be tested. During this period the release engineer can go over all the module manual test suites.
An example of such a manual test could be: open a file, play back the animation, and check some condition that must be met for the test to succeed. If a test fails, that module will be contacted and the release will be postponed. New patches will not be added to the release, except to fix a failing manual test. After fixing failing tests, all tests will be redone. The release will only happen when all automated and manual tests pass.
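When such a check can be scripted, it also becomes a candidate for the automated runners. A minimal sketch, assuming Blender's bpy API and running in background mode via blender --background --python smoke_test.py; the file path, object name and expected position are made up for illustration:

```python
# Hypothetical smoke test: open a .blend file, step through the
# animation, and verify an object ends up where we expect.
try:
    import bpy  # only available when run inside Blender
except ImportError:
    bpy = None

def close_enough(actual, expected, tol=1e-4):
    """True when two 3D points agree within `tol` on every axis."""
    return all(abs(a - e) <= tol for a, e in zip(actual, expected))

def run_smoke_test():
    bpy.ops.wm.open_mainfile(filepath="tests/anim_smoke.blend")
    scene = bpy.context.scene
    # Step through every frame instead of real-time playback, so the
    # test is deterministic in background mode.
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)
    cube = bpy.data.objects["Cube"]  # hypothetical object name
    final = tuple(cube.matrix_world.translation)
    assert close_enough(final, (0.0, 0.0, 2.0)), f"unexpected end position {final}"

if __name__ == "__main__" and bpy is not None:
    run_smoke_test()
    print("animation smoke test passed")
```

Stepping through the frames rather than playing back in real time keeps the result reproducible on any machine, which is what the automated runners need.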
Another example is to recompile Blender with a certain flag (WITH_OPENGL_DRAW_TESTS) and execute bin/tests/blender_test --gtest_filter=Draw*. This checks that all GLSL shaders can be compiled.
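For a release engineer going over several modules, a small wrapper around the test binary can record a go/no-go per suite. A minimal sketch in Python, assuming only that the binary behaves like a standard gtest runner (non-zero exit code on failure); the helper names and default paths are hypothetical:

```python
# Hypothetical helper for running a filtered gtest suite and reporting
# pass/fail, as a release engineer might do per module.
import subprocess

def gtest_command(binary, name_filter):
    """Build the argument list for a filtered gtest run."""
    return [binary, f"--gtest_filter={name_filter}"]

def run_suite(binary="bin/tests/blender_test", name_filter="Draw*"):
    cmd = gtest_command(binary, name_filter)
    # gtest exits non-zero when any test fails, so the return code is
    # all we need for a go/no-go decision.
    result = subprocess.run(cmd)
    return result.returncode == 0
```

Running this per module and collecting the results would give the release engineer a simple checklist of which suites block the release.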
I think we should aim for a test script that can be executed within 5 minutes for each module. More testing can always be done by the modules themselves. For this we should also communicate the state of a release.
Any feedback is welcome.