Building and Pushing Docker Images with Gitlab

Intro

I’ve finally had time to start using Gitlab for more than just storing old code I wrote for my college courses, code I have nearly no reason to ever look at again. One of the first gaps I wanted to fill was a CI platform. We use GoCD at work, but its lackluster template system, its frequent UI changes, and the fact that I already have to deal with it at work all turn me off from using it for anything home-prod. Gitlab’s native CI integration seems ideal, and a few competitors (Azure DevOps, for example) are following suit. Without further ado, let’s get to it.

Requirements

  • Gitlab server – I use the Omnibus installation, which works great
  • Gitlab Runners – I’m using a pair of shared runner Docker containers
  • A Sonatype Nexus repository for storing artifacts
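Since the pipeline later in this post builds images inside a docker:dind service, the runners need to use the Docker executor in privileged mode. A minimal sketch of the relevant section of a runner’s config.toml (the name and URL here are placeholders, not my actual config):

```toml
# /etc/gitlab-runner/config.toml (excerpt) — hypothetical runner registration
[[runners]]
  name = "shared-docker-runner"
  url = "https://gitlab.example.com/"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true   # required for the docker:dind service used later
```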

The Plan

By now you’re probably asking, “why not use Gitlab’s internal Docker registry for storing images?” While that is an option, I wanted to keep my source control and package repository separate to split the load, and Nexus supports the other package types I’m using. In this example we’re going to build the simplest image I could think of: a container for Gitlab runners to run linting and syntax checking against my Ansible repositories.

The Process

After much trial and error getting Docker to talk to a repository fronted by a self-signed certificate authority, and then running into the same issue with the runner itself, I came up with a working system.
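For anyone fighting the same self-signed CA battle: the Docker daemon will trust a per-registry CA dropped under /etc/docker/certs.d/&lt;host:port&gt;/ca.crt, which avoids insecure-registry flags for plain docker login/push from a host. A runnable sketch (it uses a scratch directory instead of /etc/docker/certs.d so it doesn’t require root, and the registry name is just an example):

```shell
# Docker looks for a per-registry CA at <certs root>/<host:port>/ca.crt.
# On a real host CERTS_ROOT would be /etc/docker/certs.d (and you'd need sudo).
CERTS_ROOT="$(mktemp -d)"
REGISTRY="nexus.clevelandcoding.com:8083"
mkdir -p "${CERTS_ROOT}/${REGISTRY}"
# In practice this line would be: cp your-ca.pem "${CERTS_ROOT}/${REGISTRY}/ca.crt"
printf 'dummy CA\n' > "${CERTS_ROOT}/${REGISTRY}/ca.crt"
ls "${CERTS_ROOT}/${REGISTRY}"
```

After placing the real CA file, no daemon restart is needed; the directory is consulted on the next pull or push.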

The first thing we’ll need to do is create and configure a new internal docker repository in Nexus and a service account that has permissions to push to it.

  1. Log in to your Nexus instance and navigate to Settings > Security > Roles
  2. Create a new role and name it whatever you like; I used GitlabCI
  3. Under Privileges, add nx-docker-view-* (I selected everything except the delete permissions)
  4. Under Roles, assign the role nx-anonymous
  5. Go to Users > Create user > New user, pick whatever username you desire, then add the role you created in step 2 to its permissions
  6. Set the user's password and record it for later
  7. Go to Repositories > Create repository > docker (hosted), select whatever storage you desire, then enable an HTTP connector, or an HTTPS connector if you do not plan on putting it behind a reverse proxy
  8. If your Nexus is behind a reverse proxy, add the port as an additional virtual server block to your site config for Nexus
# My nginx config file for Sonatype Nexus
server {
    listen 443 ssl;
    server_name nexus.clevelandcoding.com;
    ssl_certificate /etc/ssl/certs/mil-nrepo-sp01.pem;
    ssl_certificate_key /etc/ssl/private/mil-nrepo-sp01.key;
    # allow large uploads of files
    client_max_body_size 1G;

    # optimize downloading files larger than 1G
    #proxy_max_temp_file_size 2G;

    location / {
      proxy_pass http://172.16.1.158:8081/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
}
server {
    listen *:80;
    server_name nexus.clevelandcoding.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 8082 ssl;
    server_name nexus.clevelandcoding.com;
    ssl_certificate /etc/ssl/certs/mil-nrepo-sp01.pem;
    ssl_certificate_key /etc/ssl/private/mil-nrepo-sp01.key;
    # allow large uploads of files
    client_max_body_size 2G;

    # optimize downloading files larger than 1G
    #proxy_max_temp_file_size 2G;

    location / {
      proxy_pass http://172.16.1.158:8082/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
}

server {
    listen 8083 ssl;
    server_name nexus.clevelandcoding.com;
    ssl_certificate /etc/ssl/certs/mil-nrepo-sp01.pem;
    ssl_certificate_key /etc/ssl/private/mil-nrepo-sp01.key;
    # allow large uploads of files
    client_max_body_size 2G;

    # optimize downloading files larger than 1G
    #proxy_max_temp_file_size 2G;

    location / {
      proxy_pass http://172.16.1.158:8083/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto "https";
    }
}
[Screenshot – Step 2: configuring a role for our Gitlab user]
[Screenshot – Step 7: creating a new internal Docker repository]

The next step is to connect our Gitlab runners to the repository with the docker login command.

  1. SSH into one of your Docker hosts and run docker login https://nexus.domain.com:$port
  2. Enter the service account username and password when prompted
  3. Grab the DOCKER_AUTH_CONFIG content, which is the contents of a file similar to ~/.docker/config.json; the runner itself needs it to authenticate against the repository when pulling images. NOTE: you do not need to add this variable for this repository, but you do for each repository where you pull an image as part of a pipeline
  4. Go to the Gitlab repository that contains the Dockerfile you want to build and navigate to Settings > CI/CD > Variables
  5. Add a new variable named PASSWORD containing the password you set for the service account
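For context, the DOCKER_AUTH_CONFIG JSON that docker login writes is nothing magic: the "auth" field is simply base64("username:password"). A quick sketch with throwaway credentials (not my real service account):

```shell
# docker login stores credentials in ~/.docker/config.json shaped like:
#   {"auths": {"nexus.example.com:8083": {"auth": "<base64>"}}}
# where <base64> is base64("username:password"):
echo -n 'user:pass' | base64
# → dXNlcjpwYXNz
```

Knowing this makes it easy to sanity-check the variable you paste into Gitlab, or to regenerate it without re-running docker login.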

Now the runner can push and pull from our Nexus repository, but we need to write a pipeline file to tell Gitlab to do that:

  1. Create a new file in your repository called .gitlab-ci.yml. I use VS Code for this, but you can use pretty much any editor, or the Web IDE if you prefer.
  2. See below for the file I use. You will need to replace nexus.clevelandcoding.com:8083 with the URI and port you defined in your Nginx and/or Nexus config, svcdockeragent with your service account username, and ansible_ci_agent with the name you would like to use for the image.
image: docker:stable

services:
  - name: docker:dind
    command: ["--insecure-registry=nexus.clevelandcoding.com:8083"]

stages:
  - build
  - test
  - release

variables:
  TEST_IMAGE: nexus.clevelandcoding.com:8083/ansible_ci_agent:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE: nexus.clevelandcoding.com:8083/ansible_ci_agent:latest

before_script:
  - docker info
  - docker login -u svcdockeragent -p $PASSWORD nexus.clevelandcoding.com:8083

build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  script:
    - docker pull $TEST_IMAGE
    - docker run $TEST_IMAGE

release:
  stage: release
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
  only:
    - master
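To close the loop, here is a hypothetical .gitlab-ci.yml for one of the Ansible repositories that pulls the image we just built. Remember from step 3 above that a repository pulling the image needs the DOCKER_AUTH_CONFIG CI variable set so the runner can authenticate the pull; the job name and playbook path here are made up:

```yaml
# Hypothetical consumer pipeline — lint job and playbook path are examples
image: nexus.clevelandcoding.com:8083/ansible_ci_agent:latest

lint:
  script:
    - ansible-lint site.yml
```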

Results and Future Plans

I’ve been using this image with the two repositories of Ansible playbooks I keep for the past few weeks, and it has been working flawlessly. Previously I was installing Ansible and Ansible-Lint in the before_script stage of my Gitlab pipeline, which resulted in an average pipeline run of 4.5 minutes; using this pre-built Docker image, the pipeline now completes in just over 1 minute. I would like to find a better way to authenticate against a Docker repository in a pipeline without specifying --insecure-registry, but I haven’t been able to figure out how to inject a self-signed CA certificate into the runner containers. If you have any questions, comments, or suggestions, feel free to leave them below or contact me via Discord. Thanks for reading!
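For completeness: outside of dind, the --insecure-registry flag has a daemon-level equivalent, so if you ever need a standalone Docker host to talk to the registry without a trusted certificate you can add this to /etc/docker/daemon.json and restart the daemon (the hostname is my example registry, and a trusted CA is still the better option where possible):

```json
{
  "insecure-registries": ["nexus.clevelandcoding.com:8083"]
}
```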