Add-on, pytest/unittest and TravisCI integration example



Can anyone point me in the direction of some good test examples for maintaining add-ons?

I was hoping to write a series of tests to allow smoother migration of add-ons. Ideally the tests should be called straight from the command line (not the console) and be runnable on multiple versions of Blender (2.79 and 2.80 right now), using a continuous integration tool like TravisCI to catch when things break. My ultimate goal is to have a series of tests ready for 2.80; at this point 2.80 is the component that is changing the most, so I was hoping to open up some visibility into it.

I know unittest comes with Blender, but pytest seems to be the tool of the future (I have used pip to install a local version of pytest). Currently I cannot get either to pick up any tests. I would usually start googling at this point, but this type of problem does not seem to come up often enough to turn up answers.
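For context, the usual way to drive tests from the command line is to launch Blender headless with `--background --python <script>`; everything after a `--` separator is ignored by Blender and left in `sys.argv` for the script. A minimal sketch of building such an invocation (the script name and flags here are illustrative, not from any particular repo):

```python
import subprocess


def build_blender_test_cmd(blender_exe, test_script, script_args):
    """Build a command line that runs test_script inside headless Blender.

    Arguments after '--' are not consumed by Blender, so they reach the
    test script through sys.argv.
    """
    return [blender_exe, "--background", "--python", test_script,
            "--"] + list(script_args)


cmd = build_blender_test_cmd("blender", "run_tests.py", ["--verbose"])
print(" ".join(cmd))
# subprocess.check_call(cmd)  # enable once a Blender executable is on PATH
```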


I have been looking at exactly the same thing and found this:

However, in my case I also needed to install ffmpeg to perform all my tests, and I couldn’t get Blender installed alongside ffmpeg with apt-get.
I have found a different approach to installing Blender for tests in Travis CI (I can’t remember where exactly I copied bits from; there were a few other repos I found doing a setup for different things):

You can see my file here:

Then I have a script that runs all tests and collects coverage information.

As you can see, I personally don’t use pytest and rely just on the built-in unittest Python module.
However I do install things through pip, so it should be straightforward to just install and run pytest.

If you get to the point where you have pytest installed but it doesn’t work, could you share some sample code and your file layout? Have you tried running it locally (and not just on Travis CI)?
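On the “tests not being picked up” part: one way to sidestep any guessing about where a runner looks is to load tests explicitly through unittest’s loader API, which works the same under plain Python and under Blender. A self-contained sketch (the test case here is a stand-in, not from either of our repos):

```python
import io
import unittest


class SanityCheck(unittest.TestCase):
    """Stand-in test case; real tests would exercise the add-on."""

    def test_truth(self):
        self.assertTrue(True)


# Load tests explicitly rather than relying on auto-discovery, so the
# same call behaves identically whether it runs under the system Python
# or inside Blender's embedded interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SanityCheck)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print("tests run:", result.testsRun, "ok:", result.wasSuccessful())
```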


That is helpful, but not all the way there. I think the pip here is for the Python running on the host system, not the Python bundled with Blender, which is where pytest needs to be installed. The way you are using Python is to collect results after the Blender component has run, which is perfectly valid; one just has to remember to keep the two interpreters separate in your head.

Also not part of your brief, but I am also looking to run multiple versions of Blender and collect results that way. I will of course try to get it working with one version first. :wink:
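The multi-version part can be as simple as looping over the executables and recording each exit code. A sketch, where the mapping of versions to install locations is entirely hypothetical:

```python
import subprocess


def run_under_each(executables, common_args):
    """Run the same argument tail under each executable.

    executables: mapping of label -> path, e.g. (hypothetical paths)
        {"2.79": "/opt/blender-2.79/blender",
         "2.80": "/opt/blender-2.80/blender"}
    Returns a mapping of label -> exit code, so CI can fail if any
    version's test run failed.
    """
    results = {}
    for label, exe in executables.items():
        proc = subprocess.run([exe] + list(common_args))
        results[label] = proc.returncode
    return results


# Usage idea (paths are hypothetical):
# run_under_each(
#     {"2.79": "/opt/blender-2.79/blender",
#      "2.80": "/opt/blender-2.80/blender"},
#     ["--background", "--python", "run_tests.py"],
# )
```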

Apart from that, this is a very helpful set of scripts; I will be borrowing heavily from it for my TravisCI work.

I have had success on the pytest front, though. On another forum I asked the same question and was able to get my testing bootstrapped. I need to clean up my work and hopefully put it somewhere that people can build on.


Your issue with pytest sounds similar to my issue with coverage.

I have solved it by having my test script:

  • work out where the Python modules I need (coverage, in my case) are located (using just import coverage; coverage.__file__)
  • construct a command line that launches Blender in headless mode, passing the test script itself as the script to execute, plus the paths I need
  • when the test script sees that it is running under Blender with extra paths, it appends the paths from the command line (the ones worked out in the first step) to sys.path and starts coverage collection (import coverage; coverage.process_startup())
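The first step above boils down to path arithmetic: for a pure-Python package, two `dirname()` hops up from its `__file__` give the directory that must be on `sys.path`. A sketch, demonstrated with the stdlib `json` package purely because it is always installed; the real case would pass `"coverage"`:

```python
import os


def module_search_path(module_name):
    """Return the directory that must be on sys.path for module_name
    to be importable, assuming it is a package (a dir with __init__.py).
    """
    module = __import__(module_name)
    # e.g. .../site-packages/coverage/__init__.py -> .../site-packages
    return os.path.realpath(
        os.path.dirname(os.path.dirname(module.__file__)))


# Demonstrate with a stdlib package; for the coverage case this would
# be module_search_path("coverage").
path = module_search_path("json")
print(path)
```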

It is not beautiful, but it works well. I think that in most cases the Python embedded in Blender will work just fine with Python modules installed for the system/standalone Python (especially if the modules are pure Python, without compiled C extensions, which could have ABI differences).

You can see the code I am talking about above in
Below, launch_tests_under_blender runs as main() when the script is invoked with plain python: it works out the paths and such, then constructs a command line to rerun the same script under Blender.
run_tests is then effectively main(), but at that point we are already running inside Blender.

import logging
import os
import subprocess
import sys

logger = logging.getLogger(__name__)


def run_tests(args):
    extra_pythonpath = args[1]
    sys.path.append(extra_pythonpath)
    logger.info("Appending extra PYTHONPATH %s", extra_pythonpath)
    import coverage
    coverage.process_startup()

    # I split this into a separate function to increase coverage
    # ever so slightly.
    # I am not clear why, but it seems that coverage misses out on lines
    # within the same function as coverage.process_startup() got called.
    # Calling into another function seems to help it.


def launch_tests_under_blender(args):
    import coverage
    blender_executable = args.pop(1)
    ffmpeg_executable = args.pop(1)
    coverage_module_path = os.path.realpath(
        os.path.dirname(os.path.dirname(coverage.__file__)))
    cmd = (
        blender_executable,
        '--background',
        '--python', os.path.abspath(__file__),
    ) + tuple(args[1:])
    logger.info('Running: %s', cmd)

    env = dict(os.environ)
    env['BLENDER_USER_SCRIPTS'] = os.path.realpath('scripts')
    env['PYTHONPATH'] = coverage_module_path
    outdir = os.path.realpath('tests_output')
    subprocess.check_call(cmd, cwd=outdir, env=env)


COMMANDS = {
    'test': launch_tests_under_blender,
    'run': run_tests,
}


def main():
    try:
        args = sys.argv[sys.argv.index('--') + 1:]
    except ValueError:
        args = sys.argv[1:]



Well, I solved my pytest install a different way. I used the Python executable inside Blender explicitly to install pip:


(Be careful doing this on Windows, as Blender lives under “Program Files”, so a regular user will not have write permission; change the permissions or run as admin. Under Linux it should be fine.)

and then used the pip that got installed to install pytest:

Blender\2.79\python\Scripts\pip install pytest

Later, inside Blender code, I was able to import pytest in the normal manner.
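Once pytest is importable inside Blender, it can also be invoked programmatically instead of through its CLI, which fits the `--python` bootstrap pattern discussed above. A sketch that guards on availability (the test directory path is illustrative):

```python
import importlib.util


def run_pytest(test_dir):
    """Run pytest on test_dir if pytest is installed.

    Returns pytest's exit code (0 means all tests passed), or None
    when pytest is not available in this interpreter.
    """
    if importlib.util.find_spec("pytest") is None:
        return None
    import pytest
    # pytest.main takes the same arguments as the command line and
    # returns an exit code rather than raising SystemExit.
    return pytest.main(["-q", test_dir])


# Inside Blender this might be: run_pytest("/path/to/addon/tests")
```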


I got it all working. I have a setup that runs a basic test against an add-on daily, on the nightly builds of both 2.79 and 2.80.

The add-on is as simple as it can get while still being called an add-on, and all that is being tested is the reported version.
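For anyone reproducing this, “testing the reported version” can be as small as asserting on the add-on’s bl_info tuple. A sketch with a made-up bl_info (the real one lives at the top of the add-on’s __init__.py, and the values here are invented):

```python
import io
import unittest

# Stand-in for the bl_info dict at the top of a hypothetical add-on's
# __init__.py; the values are illustrative.
bl_info = {
    "name": "Minimal Test Addon",
    "version": (0, 0, 1),
    "blender": (2, 79, 0),
}


class TestReportedVersion(unittest.TestCase):
    def test_version(self):
        # The version an add-on reports is a tuple of ints.
        self.assertEqual(bl_info["version"], (0, 0, 1))
        # Inside Blender one could additionally compare against the
        # running host, e.g.:
        #   import bpy
        #   self.assertGreaterEqual(bpy.app.version, bl_info["blender"])


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestReportedVersion)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```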

If anyone is interested in getting something similar going, you can look at my work here:

And here are the Travis logs, for comparison: