Continuous Delivery with Docker containers

Containers are hot. It seems everybody is using them, whether building small hobby applications or large enterprise applications distributed across multiple servers. Even though Linux containers are nothing new, Docker has moved containers into the mainstream with easy-to-use tools, even on non-Linux operating systems.

From VMs to Containers

I recently moved a lot of my infrastructure from virtual machines to containers. Now almost every internal or external application is running in a container. I am using Docker on a single Linux server at the moment, but eventually I will move to a container clustering engine like Docker Swarm or Kubernetes.

Finding my perfect CI/CD Match

Before deciding on which application(s) to use for my perfect CI/CD pipeline, I wrote down a few requirements:

  • Needed to be container-based to fit my new infrastructure setup
  • Wanted to use git as the source control system
  • Needed a private container registry for commercial projects
  • Preferred an end-to-end CI/CD solution in a single package

There are lots of great CI/CD solutions on the market. Microsoft has Visual Studio Team Services, an end-to-end solution which has come a long way since Team Foundation Server. Google has Cloud Source Repository, Container Builder and Container Registry. Both options are great, but neither matches my requirement to run on my local infrastructure in Docker containers. I have heard a lot of great things about Jenkins as an automation server, but it does not provide an end-to-end solution. In the end I decided to use GitLab because it matches all of my requirements.

Setting up the containers

GitLab is pretty simple to set up, but because my infrastructure involves a reverse proxy in front of my GitLab containers I needed some additional setup parameters. I needed to force GitLab to listen for plain HTTP traffic because HTTPS was already terminated at the reverse proxy. This is easily done using environment variables.

To run tests, builds and deployments GitLab needs runners. Runners run in separate containers, and multiple runners can be registered to increase throughput. Runners are registered with tags, which the CI/CD pipeline uses to pick the right runner for a job. A cool thing about these runners is that they actually spawn new containers when executing tasks, giving you all the benefits of containers when running tests, builds and deployments.
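
Registering a runner is a one-off step run inside the runner container. A minimal sketch of what that can look like (the registration token and description are placeholders I have assumed; the tag matches the my-runner tag used in the pipeline further down):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.domain.com/" \
  --registration-token "<project-registration-token>" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --description "docker-runner" \
  --tag-list "my-runner"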

My GitLab container configuration looks like this:

version: '2'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        # HTTPS is terminated on the reverse proxy, so GitLab and the
        # registry listen on plain HTTP internally
        external_url 'https://gitlab.domain.com/'
        nginx['listen_port'] = 80
        nginx['listen_https'] = false
        registry_external_url 'https://registry.domain.com/'
        registry_nginx['listen_port'] = 80
        registry_nginx['listen_https'] = false
    volumes:
      ...
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      ...
docker-compose.yml
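
With the file above in place, starting the stack is a single command (GitLab takes a few minutes to initialise on first start, so following the logs is a handy way to check progress):

docker-compose up -d
docker-compose logs -f gitlab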

Developer Productivity vs. Production Environment

I build most of my applications on Windows 10 using Visual Studio Code. Running containers on Windows is dead simple using Docker for Windows.

A key requirement for me is that the development experience needs to be as seamless as possible. Code changes should be reflected instantly to keep the developer productive. Another requirement is to keep the development environment as close to production as possible, so you can be confident that your code also works in production. Normally these are conflicting requirements, but Docker actually does a great job of fulfilling both. I use a docker-compose file for local development and a Dockerfile for building the image for production. In my docker-compose file I mount my files directly into the container, while the Dockerfile copies the files into the image to make it portable.

In this example, my application is a website running on an nginx web server:

version: '2'
services:
  website:
    image: nginx:1.11.9-alpine
    volumes:
      - ./html:/usr/share/nginx/html
    ports:
      - 8081:80
docker-compose.yml
FROM nginx:1.11.9-alpine
COPY /html /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Dockerfile
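
To make the workflow concrete, this is roughly how the two files are used (the image name here is illustrative, not taken from my actual setup):

# Local development: serve ./html on http://localhost:8081 with live file changes
docker-compose up -d

# Production: bake the files into a portable image that the pipeline can push
docker build -t registry.domain.com/website:latest .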

Defining the CI/CD pipeline

With GitLab installed and the development environment ready, the next step is to define the CI/CD pipeline that runs when code is committed to the master branch. GitLab defines the pipeline with a CI definition file (.gitlab-ci.yml) in the project root. It is possible to define multiple stages (test, build, deploy etc.). To complete the deploy stage I needed to SSH into the host OS, pull the newly created image from the private registry, stop the previous container and restart it with the new image. I used sshpass to achieve this.

I defined two stages in my setup:

  • Build
    Builds the Docker image and pushes it to my private registry.
  • Deploy
    This stage is actually split into two environments: Staging and Production.
    Deployment to staging is automatic when code is committed. This gives developers, testers and product owners a common test environment with the latest changes.
    Deployment to production requires manual intervention, to allow for acceptance tests before pushing to production. Any image/version in staging can be promoted to production with a single click.

My CI/CD pipeline looks like this:

stages:
  - build
  - deploy
build:
  image: docker:latest
  services:
    - docker:dind
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker build -t $CONTAINER_IMAGE .
    - docker push $CONTAINER_IMAGE
  tags:
    - my-runner
deploy_staging:
  image: gitlab/dind
  services:
    - docker:dind
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    HOST_CONTAINER_NAME: "ci-staging"
  stage: deploy
  script:
    - sudo apt-get update -y && sudo apt-get install -y sshpass
    - sudo chmod +x ./deploy.sh
    - ./deploy.sh $PASS $CI_BUILD_TOKEN $CI_REGISTRY $CONTAINER_IMAGE $HOST_CONTAINER_NAME
  environment:
    name: staging
    url: https://staging.domain.com
  only:
    - master
  tags:
    - my-runner
deploy_production:
  stage: deploy
  ...
  when: manual
.gitlab-ci.yml
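
The deploy.sh script itself is not shown above. A minimal sketch of what it could look like, given the arguments the pipeline passes in (the SSH user, hostname and port mapping are placeholders I have assumed):

#!/bin/sh
# Arguments as passed from .gitlab-ci.yml:
# $1 = SSH password, $2 = registry token, $3 = registry URL,
# $4 = image to deploy, $5 = name of the container on the host
PASS=$1
TOKEN=$2
REGISTRY=$3
IMAGE=$4
CONTAINER=$5

# SSH into the Docker host (user and hostname are placeholders), pull the
# new image, remove the old container and start a new one from the image.
sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no deploy@dockerhost.domain.com "
  docker login -u gitlab-ci-token -p $TOKEN $REGISTRY &&
  docker pull $IMAGE &&
  (docker rm -f $CONTAINER || true) &&
  docker run -d --name $CONTAINER -p 8081:80 $IMAGE
"
deploy.sh (sketch)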

Continuous Delivery with Docker containers achieved!

Using the above setup I have achieved my goal of Continuous Delivery from my Windows 10 development machine to my Linux server running the Docker engine.

Another bonus with the above setup is how easy it is to roll back production if the newest version contains unexpected errors. Every environment is defined in GitLab, and every commit to master generates a unique image, which is crucial for having a rollback plan.
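
Rolling back then amounts to redeploying an earlier image tag on the host, along the lines of the following (the container name and tag placeholder are illustrative):

# Start the production container from an earlier image (tags are commit SHAs)
docker pull registry.domain.com/group/project:<previous-commit-sha>
docker rm -f ci-production
docker run -d --name ci-production registry.domain.com/group/project:<previous-commit-sha>

In practice the same effect can be had by re-running the manual deploy_production job of an earlier pipeline from GitLab's Environments page.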

So far I am happy with my CI/CD setup using GitLab. I might be tempted to look into Jenkins or the cloud alternatives from Microsoft and Google in the future. The right CI/CD solution really depends on your requirements, but not having a CI/CD setup at all is almost a crime with all the great solutions available.