Creating build environments with Docker

‘How to set up a build environment’ documents are like cooking recipes, only written for a world where eggs are not always compatible with bowls, teaspoon sizes change every week, and the cooking time depends on the color of your kitchen walls. In our software world, every recipe needs to be reviewed and updated at least a few times a month; otherwise, you might be chased and eaten by the omelet you tried to prepare (or get a strange build error, which is even worse).

In such a hostile world, it is much easier to clone things than to recreate them. That is why we used virtual machines in the old days, and that is why we use Docker today.

This short article will show how to create a primitive build environment that will allow us to compile a simple piece of code (the example is taken from the Learning OpenCV 3 book).
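The exact listing from the book is not reproduced here; for the purpose of this recipe, any small CMake-based OpenCV project will do. A minimal sketch of such a project file (the names and contents are my own illustration, not the book's code):

```cmake
# CMakeLists.txt - hypothetical minimal project the build environment expects
cmake_minimum_required(VERSION 3.10)
project(example)

# OpenCV is provided inside the container by the libopencv-dev package
find_package(OpenCV REQUIRED)

add_executable(example main.cpp)
target_link_libraries(example ${OpenCV_LIBS})
```

Any source tree with a top-level CMakeLists.txt like this will build with the container we define below.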

The recipe

Step 1. Install Docker

At the time of writing, the instructions for Ubuntu Linux can be found here, but tomorrow they might be somewhere else, so Google is your friend. You can try to copy-paste these commands, but there is no guarantee they will still work:

$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

You can verify that your Docker engine works by executing:

$ sudo docker run hello-world

You should see a message containing Hello from Docker! surrounded by some explanatory text, or some error message, in which case you need to start googling again. If it works, add yourself to the docker group so you don’t have to use the sudo command every time.

$ sudo groupadd docker
$ sudo usermod -aG docker $USER

Log out and log back in (or run newgrp docker) for the group change to take effect.

Step 2. Create Docker config file

Docker configuration files are similar to our “How to…” documents. The Docker builder uses them to produce an image, which is something like a lightweight virtual machine. Our config file (called Dockerfile) will look like this:

FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc \
    g++ \
    cmake \
    libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

CMD cd /home/out && cmake /home/source/. && make

FROM, ARG, RUN and CMD are Docker keywords. A full description can be found here, so I will only extract the gist:

  • we use Ubuntu 18 as our base image
  • setting DEBIAN_FRONTEND to noninteractive saves us from all the questions from the package manager (timezone etc.)
  • we define packages that will be installed inside our image
  • we set a default command that builds whatever sits inside /home/source

Doing the same with a VM would require us to install Ubuntu on virtual hardware, add all the needed packages (gcc etc.), and put the build command into some script.
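As a small variation, the cd in the default command can be replaced with the WORKDIR instruction, which is slightly more idiomatic. A sketch under the same assumptions as the Dockerfile above:

```dockerfile
FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc g++ cmake libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

# WORKDIR creates the directory if needed and replaces the `cd` in CMD
WORKDIR /home/out
CMD cmake /home/source/. && make
```

Both versions behave the same for our purposes; the rest of the article uses the cd form shown earlier.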

Step 3. Build the image

Building an image is done with the build command. Here I present two versions that can be used for our example: the first works only if your config file is called Dockerfile; the second accepts the file name as an extra parameter. I also add a tag (ubuntu_gcc) so I can refer to the image by name.

$ docker build -t ubuntu_gcc .
$ docker build -t ubuntu_gcc -f name_of_the_file .

Step 4. Run the image

Now we are ready to build our example application:

$ mkdir test && cd test
$ git clone git@github.com:yesiot/first_whale_toast.git fwt
$ docker run --rm -v "$PWD/fwt":/home/source -v "$PWD/out":/home/out ubuntu_gcc

Done. We can find the binary inside the out directory. There is one important feature we use above: mounting directories. Our default command looks for the project inside /home/source and puts the build output into /home/out. With the -v parameter, we map host directories to locations inside the container. You can quickly check how this works by running:

$ docker run --rm -v "$PWD/fwt":/home/source -v "$PWD/out":/home/out -it ubuntu_gcc /bin/bash

Now, instead of the default command, we run an interactive bash session. Inspect the home folder and try to create some files inside the source directory; you can verify that the created files are visible on your host machine as well.
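The run command is long enough to be worth wrapping in a small helper. A sketch (the function name and defaults are my own additions, not part of Docker):

```shell
# docker_build_cmd - compose the `docker run` line for the containerized build
# (hypothetical helper; assumes the ubuntu_gcc tag from Step 3)
docker_build_cmd() {
    src="$1"                 # host directory with the sources
    out="$2"                 # host directory for the build output
    image="${3:-ubuntu_gcc}" # image tag, defaults to the one built above
    printf 'docker run --rm -v %s:/home/source -v %s:/home/out %s\n' \
        "$src" "$out" "$image"
}

# print the command for the example project; eval it to actually run the build:
docker_build_cmd "$PWD/fwt" "$PWD/out"
# eval "$(docker_build_cmd "$PWD/fwt" "$PWD/out")"
```

Printing the command before eval-ing it also makes it easy to double-check the mount points.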

Step 5. Clouditize

Until now, we have just learned a fancy way to keep project dependencies isolated from the main development machine. The real magic begins when you push your image to the cloud. For this, I created a public Docker Hub repository and executed the following commands:

$ docker tag ubuntu_gcc:latest <your docker id>/builder_open_cv:1.0.0
$ docker push <your docker id>/builder_open_cv:1.0.0

The image now sits in the repository and can be used by anyone, without the need to rerun the build process.

$ docker run --rm -v "$PWD/first_whale_toast":/home/source -v "$PWD/out":/home/out <your docker id>/builder_open_cv:1.0.0

Maybe this is not the most impressive example, but for a project with a huge number of dependencies (like Yocto) this approach lets you start compiling as soon as the image (e.g., Yocto CROPS) is downloaded.