This article will be of interest to both testers and developers, but it is aimed primarily at automation engineers who face the problem of configuring GitLab CI/CD for integration testing when infrastructure resources are scarce and/or there is no container orchestration platform. I will show how to deploy test environments with Docker Compose on a single GitLab shell runner so that environments deployed in parallel do not interfere with each other.


In my practice I have often had to "heal" integration testing on projects. And often the first and most significant problem is a CI pipeline in which integration testing of the developed service(s) is carried out in a dev/stage environment. This causes quite a few problems:

  • Due to defects in a particular service, the test environment can be corrupted with broken data during integration testing. There were cases when a request with a malformed JSON body broke a service and left the stand completely unusable.
  • The test environment slows down as test data accumulates. I think there is no point in describing database cleanup/rollback: in my practice I have never seen a project where that procedure went smoothly.
  • There is a risk of breaking the test environment while testing system-wide settings, for example user/group/password/application policies.
  • Test data from autotests gets in the way of manual testers.

Some will say that good autotests should clean up their data afterwards. I have counterarguments:

  • Dynamic stands are very convenient to use.
  • Not every object can be deleted from the system through the API. For example, a delete call may not be implemented because it contradicts the business logic.
  • Creating an object through the API can produce a huge amount of metadata, which is problematic to delete.
  • If tests are interdependent, cleaning up data after the run turns into a headache.
  • Extra (and, in my opinion, not always necessary) API calls.

And the main argument: as soon as test data starts being cleaned up directly in the database, it turns into a real circus! From the developers you hear: "I only added/deleted/renamed a table, why did 100500 integration tests fail?"

In my opinion, the best solution is a dynamic environment.

Many people use docker-compose to run a test environment, but few use docker-compose for integration testing in CI/CD. And here I am not considering Kubernetes, Swarm, and other container orchestration platforms: not every company has them. It would be nice if docker-compose.yml were universal.

  • Even if we have our own QA runner, how can we make sure that services launched via docker-compose do not interfere with each other?
  • How to collect logs of tested services?
  • How to clean the runner?

I have my own GitLab runner for my projects, and I ran into these questions while developing a Java client for TestRail, or rather, while running its integration tests. So we will solve these questions with examples from that project.

GitLab shell runner

For the runner, I recommend a Linux virtual machine with 4 vCPU, 4 GB RAM, 50 GB HDD.

There is plenty of information on the Internet about configuring gitlab-runner, so briefly:

  • Log in to the machine via SSH.
  • If you have less than 8 GB of RAM, I recommend adding 10 GB of swap so that the OOM killer does not come and kill our tasks for lack of RAM. This can happen when more than 5 tasks run simultaneously. The tasks will be slower, but stable.
  • Install gitlab-runner, docker, docker-compose, and make.
  • Add the gitlab-runner user to the docker group.
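The steps above can be sketched as follows for Ubuntu/Debian (the package names and swap size are assumptions; in practice the official GitLab apt repository is added first, which is omitted here):

```shell
# Create a 10 GB swap file so the OOM killer stays away (size is a suggestion)
sudo fallocate -l 10G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Install the tooling (assumes the GitLab apt repository is already configured)
sudo apt-get update
sudo apt-get install -y gitlab-runner docker.io docker-compose make

# Allow the gitlab-runner user to use the Docker daemon
sudo usermod -aG docker gitlab-runner
```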

  • Register the gitlab-runner.
  • Open /etc/gitlab-runner/config.toml for editing and add:
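A minimal sketch of the relevant part of config.toml (the value 4 matches the 4 vCPU machine recommended above; the runner section is abbreviated — registration fills in url, token, and the rest):

```toml
# Maximum number of jobs this runner process runs at the same time
concurrent = 4

[[runners]]
  name = "qa-shell-runner"   # illustrative name
  executor = "shell"
  limit = 4                  # per-runner job limit
```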

This will allow you to run parallel tasks on the same runner.

If your machine is more powerful, say 8 vCPU and 16 GB RAM, then these numbers can be at least doubled. But it all depends on what exactly will run on this runner and in what quantity.

That is enough for the runner.

Preparing docker-compose.yml

The key piece is docker-compose.yml, which will be used both locally and in the CI pipeline.

The COMPOSE_PROJECT_NAME variable will be used to start several instances of the environment.

An example of my docker-compose.yml:
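The real file lives in the project repository; here is a simplified sketch (service names and images are illustrative). The important point is that no host ports are published, so containers from parallel runs cannot clash; Docker Compose prefixes container names and the network with COMPOSE_PROJECT_NAME automatically:

```yaml
version: "3.8"

services:
  app:
    image: registry.example.com/my-service:latest  # hypothetical image
    environment:
      DB_HOST: db
    depends_on:
      - db
    # Note: no "ports:" section - tests talk to the service
    # over the compose network, not via the host.

  db:
    image: postgres:12-alpine
    environment:
      POSTGRES_PASSWORD: test
```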


Makefile Preparation

I use a Makefile, as it is very convenient both for managing the environment locally and in CI.
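A sketch of the targets I have in mind (target names and the test command are my conventions; COMPOSE_PROJECT_NAME defaults to "local" for runs outside CI):

```makefile
COMPOSE_PROJECT_NAME ?= local

docker-up:
	COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) docker-compose up -d

docker-down:
	COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) docker-compose down -v

docker-logs:
	mkdir -p logs
	COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) docker-compose logs --no-color > logs/services.log

test:
	./gradlew test  # hypothetical build command; use your project's tool
```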

Preparing .gitlab-ci.yml

Running integration tests
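A sketch of such a job (the stage name and Makefile targets are my conventions; CI_JOB_ID is a built-in GitLab variable, which makes each environment instance unique):

```yaml
integration-tests:
  stage: test
  variables:
    COMPOSE_PROJECT_NAME: it-$CI_JOB_ID
  script:
    - make docker-up
    - make test
  after_script:
    - make docker-logs
    - make docker-down
  artifacts:
    when: always
    paths:
      - logs/
```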

After launching such a task, the artifacts will contain a logs directory with the service and test logs, which is very convenient in case of errors. Each of my tests writes its own log in parallel, but I will talk about that separately.

Runner Cleaning

This task will be launched only on a schedule.
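A sketch of the cleaning job (`docker system prune` is the standard Docker cleanup command; the `only: schedules` rule restricts the job to scheduled runs):

```yaml
clean-runner:
  stage: clean
  only:
    - schedules
  script:
    # Remove leftover containers from canceled pipelines, then unused
    # images/volumes. Assumes no other jobs are running at this moment.
    - docker ps -aq | xargs -r docker rm -f
    - docker system prune -af --volumes
```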

Next, go to our GitLab project -> CI/CD -> Schedules -> New Schedule and add a new schedule.


Let's run 4 tasks in GitLab CI.

In the logs of the last task with integration tests, we see containers from different tasks.

Everything seems fine, but there is a nuance. A pipeline can be force-canceled while the integration tests are running, in which case the running containers will not be stopped. So from time to time you need to clean the runner. Unfortunately, the corresponding task in GitLab CE is still in Open status.

But we have added a scheduled task launch, and no one forbids us to start it manually.

Go to our project -> CI / CD -> Schedules and run the Clean Runner task:


Results

  • We have one shell runner.
  • There are no conflicts between tasks and the environment.
  • We have a parallel launch of tasks with integration tests.
  • You can run integration tests both locally and in the container.
  • Service and test logs are collected and attached to the pipeline task.
  • It is possible to clean the runner from old docker-images.

Setup time is ~ 2 hours.

We will be glad to get your feedback, so don't hesitate to contact us.