Proposal: Release Module Test Suites

Hi all,

Since Blender 2.80 we split off master and the release branch very early in the process. During this phase crashes should be fixed in the release branch and merged back to master. However, without any figures to back this up, there was a concern that master was tested better than the release branch, so that issues that were solved in master did not always end up fixed in the release branch.

During the release of Blender 2.90 we saw that the quality of the release dropped: we needed a 2.90.1 release and might need a 2.90.2. Also, the release branch is currently not tested beyond running the automated test cases and a quick "does Blender start and can we enter edit mode" check. Back-porting patches is development work and things can go wrong, especially as patches can have side effects that the developer performing the back-ports cannot tell from the patch itself.

My proposal is to structure the testing that happens after the back-ports are done, so that module owners have control over the quality without it becoming a heavy process.

  1. Each module owner has a test suite that can be found directly on the module page on developer.blender.org. This test suite contains a number of manual tasks that can be performed within a few minutes (front to back) and that ensure the basic parts of the module are working.
    Tests that can be automated should be added to the test runners and not be part of the manual test suites.
  2. Have a breathing period of at least two working days between the last commit and the actual building of the release. This period is also for making sure that no patches need to be reverted and that the quality of the final product can be tested. During this period the release engineer can go over all the manual module test suites.

An example of such a manual test suite could be to open a file, play back the animation and check some condition that must be met for the test to succeed. If a test fails, the module will be contacted and the release will be postponed. New patches will not be added to the release branch, except when they solve a failing manual test. After fixing the failing tests, all tests will be redone. The release will only happen when all automated and manual tests pass.
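For illustration, a check like that could also run headless from the command line. A minimal sketch, assuming a hypothetical test file (the .blend path and the checked condition are made up; --background, --python-exit-code and --python-expr are existing Blender command line options):

    # Hypothetical playback smoke test: open a .blend in background mode, step
    # through the full frame range, and report failure via the exit code.
    ./blender --background tests/files/animation/simple_playback.blend \
        --python-exit-code 1 \
        --python-expr "import bpy; s = bpy.context.scene; [s.frame_set(f) for f in range(s.frame_start, s.frame_end + 1)]" \
        || { echo "playback test FAILED"; exit 1; }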

Another example is to recompile Blender with a certain flag (WITH_OPENGL_DRAW_TESTS) enabled and execute bin/tests/blender_test --gtest_filter=Draw*. This checks whether all GLSL shaders can be compiled.
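A minimal sketch of those steps on Linux, assuming an already configured CMake build directory (the directory name and job count are assumptions):

    # Reconfigure the existing build with the draw tests enabled and rebuild.
    cd build_linux
    cmake -DWITH_OPENGL_DRAW_TESTS=ON .
    make -j8
    # Run only the GTest cases that compile the GLSL shaders.
    ./bin/tests/blender_test --gtest_filter='Draw*'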

I think we should aim for a test script per module that can be executed within 5 minutes. More testing can always be done by the module teams themselves. For this we should also communicate the state of a release.

Any feedback is welcome.


I hope we can avoid manual tests almost entirely in the end. It's a good starting point to ask: if I were given 5 minutes to manually test my module, what would I do? But then also consider how you could automate that, because it might not be as hard as it seems.

Makes sense.

This test could be automated right now on the buildbot I think, by enabling it on Linux only and using the software OpenGL libraries to run Blender.
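A rough sketch of what such a buildbot step could look like, assuming Mesa's software rasterizer and a virtual X server are available on the Linux worker (LIBGL_ALWAYS_SOFTWARE and xvfb-run come from Mesa/X, not from Blender):

    # Force software OpenGL so the draw tests can run on a headless Linux machine.
    export LIBGL_ALWAYS_SOFTWARE=1
    xvfb-run -a ./bin/tests/blender_test --gtest_filter='Draw*'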


> This test could be automated right now on the buildbot I think, by enabling it on Linux only and using the software OpenGL libraries to run Blender.

If we’re just looking for a basic “but does it build” test for the shaders, I’d rather take on a dependency on glslang (which I’m guessing we’re going to need sooner or later anyhow for Vulkan) and do the test in a platform-independent way, similar to what is proposed for the Cycles OpenCL kernels.
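For reference, a stand-alone check with glslang's reference compiler could look like the sketch below; the shader file name is hypothetical, and this only works for shaders that are complete translation units on disk rather than fragments assembled at runtime:

    # Validate a fragment shader without a GPU or vendor driver.
    # -S selects the shader stage explicitly instead of relying on the file extension.
    glslangValidator -S frag gpu_shader_example_frag.glsl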

Either way is fine with me. But I think the #1 priority is to compile and execute the kernels with actual AMD/NVIDIA/Intel drivers; everything else is kind of a stop-gap.