First Steps with Gitlab CI and Python

Gitlab CI

After I set up this website using a Gitlab pipeline, I tried to set up a continuous integration pipeline for one of my Python projects. I found several contradictory tutorials on the matter, most of which appear to be out of date. As it turns out, setting up a CI pipeline takes virtually no work. I created a dummy repository as a minimal working example. All you need to do is add two files to the root folder of your repository.

  • .gitlab-ci.yml which describes what to do.
  • requirements.txt which lists the dependencies of the project that are needed for testing.

Setup

Assume we have a repository with the following structure:

gitlab-ci-demo/
│
├── .gitlab-ci.yml
├── requirements.txt
│
├── project/
│   └── do_stuff.py
│
└── tests/
    └── test_stuff.py
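The contents of the two Python files are not part of the repository listing above, so as a purely hypothetical example, project/do_stuff.py could expose a small numpy helper and tests/test_stuff.py a matching test; nose discovers any function whose name starts with test_:

```python
# Hypothetical example contents -- the demo repository's actual code may differ.

# project/do_stuff.py
import numpy as np

def moving_average(values, window):
    """Simple moving average of `values` over a sliding window."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

# tests/test_stuff.py -- nose collects functions whose names start with `test_`
def test_moving_average():
    result = moving_average([1, 2, 3, 4], window=2)
    assert list(result) == [1.5, 2.5, 3.5]
```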

The gitlab-ci file describes one or more test workflows. My example file looks like this:

py36_nose:
  image: python:3.6
  script:
    - apt-get update -q -y
    - pip install -r requirements.txt
    - nosetests -v --nocapture

It defines one workflow named py36_nose which installs the Python packages listed in the requirements file and then runs nosetests. The image parameter specifies a docker image (from Docker Hub) that is used by the docker executor. In this case I use the official Python image with Python version 3.6.
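Nothing limits you to a single workflow. As an untested sketch, testing against a second Python version should only require a second top-level entry pointing at a different image:

```yaml
py35_nose:
  image: python:3.5
  script:
    - pip install -r requirements.txt
    - nosetests -v --nocapture

py36_nose:
  image: python:3.6
  script:
    - pip install -r requirements.txt
    - nosetests -v --nocapture
```

Both jobs would then show up separately in the pipeline view.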

The requirements file is a list of packages required for the execution and testing of the project. Since I only use numpy in the project and nose for testing, it is relatively slim:

nose
numpy

And that’s it. Gitlab automagically notices the .gitlab-ci.yml and runs the test pipelines. Now I get an email every time I push changes that break one of the tests.

What’s next?

I usually use conda to handle my Python environments. I would guess that there is a way to replace the requirements.txt with an environment.yaml file without changing all that much in this example. However, I have not tried this yet.
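As an untested guess at what that might look like, the environment.yaml could declare the same dependencies, with the job using a conda-equipped docker image (e.g. continuumio/miniconda3) and creating the environment from the file instead of calling pip:

```yaml
# environment.yaml -- hypothetical conda equivalent of requirements.txt
name: gitlab-ci-demo
dependencies:
  - python=3.6
  - numpy
  - nose
```

The script section would then run something like `conda env create -f environment.yaml` before the tests, but again, I have not verified this.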
