Using CLion with Docker

The simple way is to follow this tutorial created by someone from the CLion team. There is a good chance you will get all the help you need there. You can also read this post, where someone who has no clue what he is doing tries to set things up in his build environment. Or you can follow that someone’s approach and write a short article on how to do things, because writing is learning. Or was it something about sharing… never mind. Let’s go.

Assumption no. 1 – you have CLion installed.

Assumption no. 2 – you have a Docker engine running on your machine.

This is our source file (an example from the ImageMagick library):

#include <Magick++.h> 
#include <iostream> 

using namespace std; 
using namespace Magick; 

int main(int argc,char **argv) 
{ 
  InitializeMagick(*argv);

  Image image;
  try { 
    image.read( "logo:" );
    image.crop( Geometry(100,100, 100, 100) );
    image.write( "logo.png" ); 
  } 
  catch( Exception &error_ ) 
    { 
      cout << "Caught exception: " << error_.what() << endl; 
      return 1; 
    } 
  return 0; 
}

This is our CMake config file:

cmake_minimum_required(VERSION 3.10)
project(image_cropper)

find_package(ImageMagick REQUIRED COMPONENTS Magick++)

add_executable(app main.cpp)
target_include_directories(app PRIVATE ${ImageMagick_INCLUDE_DIRS})
target_link_libraries(app ${ImageMagick_LIBRARIES})

And this is the error we get when we try to run cmake:

CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
  Could NOT find ImageMagick (missing: ImageMagick_Magick++_LIBRARY)

Not good. But instead of polluting our machine with the libmagick++-dev package, we will pollute it with a container (package inside). Containers are much easier to clean up, maintain and send to CI when needed. Our Docker config file looks like this:

FROM ubuntu:18.04

RUN apt-get update \
  && apt-get install -y \
    build-essential \
    gcc \
    g++ \
    cmake \
    libmagick++-dev \
  && apt-get clean

And we can execute the following commands to build our image and the application.

$ docker build -t magic_builder .
$ docker run -v$PWD:/work -it magic_builder /bin/bash
$$ cd work
$$ mkdir build_docker && cd build_docker
$$ cmake .. && make -j 8
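
As a quick sanity check (my own addition, not part of the original post), running the freshly built binary inside the container should drop a cropped logo.png next to the build files:

$$ ./app && ls -l logo.png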

So far, so good, but our IDE is sitting in a corner looking sad as we type some shell commands. That is not how it is supposed to be. Come here, CLion, it is time for you to help us (ears up, tongue out, and it jumps happily into the foreground).

Step 1 Add ssh, rsync and gdb to the image. Also, add an extra user that can be used for opening an SSH connection.

FROM ubuntu:18.04

RUN apt-get update \
  && apt-get install -y \
    gdb \
    ssh \
    rsync \
    build-essential \
...

RUN useradd -m user && yes password | passwd user

Step 2 Rebuild and start the container. Check if the ssh service is running and start it if not.

$ docker build  -t magic_builder .
$ docker run --cap-add sys_ptrace -p127.0.0.1:2222:22 -it magic_builder /bin/bash
$$ service ssh status
 * sshd is not running
$$ service ssh start
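
Before touching the IDE, it is worth confirming from a second terminal on the host that the container really accepts SSH logins (my extra step; user and password come from the Dockerfile above):

$ ssh -p 2222 user@localhost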

Step 3 Now go to your IDE settings (Ctrl Alt S), section Build, Execution, Deployment, and add a Remote Host toolchain. Add new credentials, filling in the user name and password from the Dockerfile and the port from the run command (2222).

If everything goes well, you should have 3 green checkmarks. Now switch to the new toolchain in your CMake profile (you can also add a separate profile if you want). That’s it. You can build, run and debug using your great CLion IDE. Some improvements could still be made to our Docker image (auto-start of ssh, running in the background), but all of this is already somewhere on the Internet (same as this instruction, but who cares).
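
One possible refinement (a sketch of my own, not from the original setup): start sshd directly as the container command, so the container can run detached in the background and the service does not have to be started by hand. The /run/sshd directory is created first because sshd refuses to start without it on Ubuntu.

$ docker run -d --cap-add sys_ptrace -p127.0.0.1:2222:22 --name clion_remote \
    magic_builder sh -c "mkdir -p /run/sshd && exec /usr/sbin/sshd -D"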

Color recognition with Raspberry Pi

We know how to build. We know how to run. We even know how to use different architectures. Time to start doing something useful.

Our goal: deploy a simple, containerized application that will display the color seen by the Raspberry Pi’s camera*. The described application and its Docker config file can be found here.

Because we target the ARMv6 architecture, I decided to base the Docker image on Alpine Linux. It supports many different architectures and is very small (about 5 MB for the minimal version). Let’s add the OpenCV library and the target application on top of it. Below is the Docker config file.

FROM arm32v6/alpine

ARG OPENCV_VERSION=3.1.0

RUN apk add --no-cache \
    linux-headers \
    gcc \
    g++ \
    git \
    make \
    cmake \
    raspberrypi

RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
    unzip ${OPENCV_VERSION}.zip && \
    rm -rf ${OPENCV_VERSION}.zip && \
    mkdir -p opencv-${OPENCV_VERSION}/build && \
    cd opencv-${OPENCV_VERSION}/build && \
    cmake \
    -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D WITH_FFMPEG=NO \
    -D WITH_IPP=NO \
    -D WITH_OPENEXR=NO \
    -D WITH_TBB=YES \
    -D BUILD_EXAMPLES=NO \
    -D BUILD_ANDROID_EXAMPLES=NO \
    -D INSTALL_PYTHON_EXAMPLES=NO \
    -D BUILD_DOCS=NO \
    -D BUILD_opencv_python2=NO \
    -D BUILD_opencv_python3=NO \
    .. && \
    make -j$(nproc) && \
    make install && \
    rm -rf /opencv-${OPENCV_VERSION} 

COPY src/** /app/
RUN mkdir -p /app/build && cd /app/build && cmake .. && \
    make && \
    make install && \
    rm -rf /app

ENTRYPOINT ["/usr/local/bin/opencv_hist"]
CMD ["0"] 

I set my binary as the entry point, so it will run when the container is started. I also use CMD to set a default parameter, which is the camera index; if none is given, 0 will be used. Now we can build and push this image to Docker Hub (or another Docker registry).

$ docker build --rm -t 4point2software/rpi0 .
$ docker push 4point2software/rpi0

Assuming you have a camera attached and configured on your Pi Zero, you can execute the following lines on the target machine.

$ docker pull 4point2software/rpi0
$ docker run --device /dev/video0 4point2software/rpi0

This should result in an output that will change every time you present a different color to your camera.
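
Because the camera index is only the CMD default, it can be overridden by appending an argument to the run command (the index below is hypothetical, e.g. for a second camera):

$ docker run --device /dev/video1 4point2software/rpi0 1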

A very nice thing about building applications inside containers is that they are completely independent of the target file system. The only requirement is the Docker engine – everything else we ship with our app. Different architecture needed? Just change the base image (e.g. arm32v7/alpine). No need to set up a new toolchain, cross-compile, and all the related stuff. Think about sharing build environments and CI builds, and you will definitely consider this something worth trying.

* I will be using a Raspberry Pi Zero with a camera module to get the peak value of the HS histogram.

Building Docker image for Raspberry Pi

We like containers and we like Pi computers. Containers running on our Pis would be like a “double like”, and although a “double like” does not exist in the real world*, running Docker on a Pi is still possible.

A good (though a little bit outdated) instruction on how to install Docker on your board can be found in the first part of this article. A short copy-paste is here:

$ sudo apt-get install apt-transport-https ca-certificates software-properties-common -y
$ curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh
$ sudo usermod -aG docker pi
$ sudo reboot
$ sudo systemctl start docker.service

After following the described steps, you should be able to run the Docker engine and start/stop containers.
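
A quick sanity check on the Pi (my addition) before building anything:

$ docker --version
$ docker run --rm hello-world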

Let’s now try to create a sample image. Please note that we do this on our host machine, as our experience tells us that it’s always faster and does not require putting cups of cold water on top of the Pi’s CPU. We search for “raspbian” on Docker Hub and find the raspbian/stretch image, which should be good to start with. We will add the Boost library just to have an extra example layer.

FROM raspbian/stretch
RUN apt-get update && apt-get install -y \
   libboost1.62-all \
   && rm -rf /var/lib/apt/lists/*

We try to build it with:

$ docker build -t rpi_test .

And we get the following error:

Sending build context to Docker daemon  2.048kB
 ...
 standard_init_linux.go:211: exec user process caused "exec format error"
 ...

So it is almost working. Just not quite yet. The problem is that raspbian/stretch is an arm image (which makes sense, as the Pi is an arm device). You can check it by running:

$ sudo docker image inspect raspbian/stretch | grep Arch
   "Architecture": "arm",

How do we run an arm image on an x86 architecture? Use an emulator. QEMU has always helped us in such cases, so let’s make it do its magic for us.

$ sudo apt install qemu-user-static
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

The first line installs the emulator. The second tells our kernel to open arm (and other architectures’) binaries with QEMU. You can also do this part manually by running the commands below. In that case we register only the armv7 architecture, so you would need to repeat this step for every architecture you would like to use (with the correct magic strings).

$ sudo sh -c "echo -1 > /proc/sys/fs/binfmt_misc/qemu-arm"
$ sudo sh -c "echo ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:OCF' > /proc/sys/fs/binfmt_misc/register"

Now we can not only build our custom images for the Pi but also run them to inspect/adjust before deployment. You can push this image into your Docker Hub repo and try to pull it from the Raspberry Pi.

$ docker tag rpi_test <docker_id>/rpi_test
$ docker login
$ docker push <docker_id>/rpi_test

And on your Pi.

$ docker run -it <docker_id>/rpi_test /bin/bash

Should work :). 👍👍

* liking something twice results in not liking at all

Docker-compose and user ID

Many articles mention that there is one more way to run a container as a different user: using docker-compose, a tool that instructs Docker on how to run our containers. In short: instead of command-line parameters, we use a structured config file that can look like this:

version: '3'
services:
  my_service:
    image: my_image
    command: /bin/bash

We can start our service by running:

$ docker-compose run my_service

Which is equivalent to:

$ docker run -it my_image /bin/bash

Specifying a UID is just one more line in the config file:

version: '3'
services:
  my_service:
    image: my_image
    user: $UID:$GID
    command: /bin/bash

Unfortunately, bash does not set GID by default, so we need to do it before running docker-compose. Using the id command inside a config file won’t work as the file is not pre-processed in any way.

$ GID=$(id -g) docker-compose run my_service
$$ id
uid=1000 gid=1000 groups=1000

Conclusion

Docker-compose is a nice tool that wraps the docker run command and allows us to make the configuration part of the project. It can be useful when creating build environments, automated tests, or complex run configurations.

How to set user when running a Docker container

In the previous post we created a C++20 app builder image. It works, but it has one very annoying feature: all created output files are owned by the root user.

$ ls -all
-rw-r--r-- 1 root root 13870 okt 18 20:06 CMakeCache.txt
drwxr-xr-x 5 root root 4096 okt 18 20:06 CMakeFiles
-rw-r--r-- 1 root root 1467 okt 18 20:06 cmake_install.cmake
-rw-r--r-- 1 root root 5123 okt 18 20:06 Makefile
-rwxr-xr-x 1 root root 54752 okt 18 20:06 opencv_hist

The reason is that, by default, a Docker container runs as root, and all operations inside are executed on its behalf. And yes, you can use Docker to bypass the root restrictions on your host machine (if the daemon is not running in rootless mode).

$ echo "only root can read me" > secret_file
$ chmod 600 secret_file && sudo chown root:root secret_file
$ cat secret_file
cat: secret_file: Permission denied
$ docker run -v$PWD:/work -it my_image /bin/bash
$$ cat /work/secret_file
only root can read me

We don’t care about security for now – we just want to delete our object files without typing a password all the time.

Specifying user id

A simple trick is to use the docker run command with the --user argument. As you might guess, it allows you to specify the user that will be used when running the container. Interestingly, if you use a numeric ID, the user does not have to exist inside the container. The given UID will just be used in place of root, which allows us to do this:

$ docker run --user "$(id -u):$(id -g)" -it my_image /bin/sh
$$ id
uid=1000 gid=1000 groups=1000
$$ whoami
whoami: cannot find name for user ID 1000

As you can see, the user 1000 does not really exist. For our simple build example that is fine, but it can cause trouble for some operations.
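
A small sketch of the kind of trouble I mean (my example): without a passwd entry there is no home directory, so HOME typically falls back to / and anything that wants to write below it fails.

$$ echo $HOME                 # usually just "/" when the UID has no passwd entry
$$ touch $HOME/.some_config   # typically fails with "Permission denied"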

Creating user

To create an additional user with a specific UID, we can add these 3 lines to the Dockerfile.

...
RUN addgroup --gid 1000 my_user
RUN adduser --disabled-password --gecos '' --uid 1000 --gid 1000 my_user
USER my_user
...

We disable the password and provide empty GECOS data (full name, phone number, etc.).

We can improve it a little by removing hard-coded ID numbers and using arguments instead.


ARG GROUP_ID
ARG USER_ID
...
RUN addgroup --gid $GROUP_ID my_user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID my_user
USER my_user
...

Now we can pass our UID while building the image.

$ docker build --build-arg GROUP_ID=$(id -g) --build-arg USER_ID=$(id -u) -t json_test .

Unfortunately, this means that anyone who wants to use our image will have to build it to make sure that her/his ID matches the one inside the container (my_user).

Use Docker to compile C++ 20

Last time we created a simple Docker image that allowed building any OpenCV-based application. Today we will go one step further and allow our image to compile spaceships.

By spaceship I mean the three-way comparison operator (<=>) – a new feature of the C++ language (part of the still not officially released C++20). To use it we need GCC 10, so let’s add it to our image. We need to add the same steps to our recipe that you would normally execute on your dev machine: get dependencies, clone, build, and install. I also included the latest CMake version, because the default one used by Ubuntu 18 (3.16.0) did not support C++20 yet.

FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

# GCC dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libgmp-dev \ 
    libmpfr-dev \
    libmpc-dev \ 
    bash \
    git \
    flex \
    gcc-multilib \
    && rm -rf /var/lib/apt/lists/*

# CMAKE dependencies
RUN apt-get update && apt-get install -y \
    libssl-dev \
    && rm -rf /var/lib/apt/lists/*

# OpenCV Library
RUN apt-get update && apt-get install -y --no-install-recommends\
    libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

# Install CMAKE
RUN git clone https://github.com/Kitware/CMake.git cmake_repo && \
    cd cmake_repo && \
    git checkout v3.17.3 && \
    ./bootstrap && \
    make && \
    make install && \
    cd .. && \
    rm cmake_repo -r

# Install GCC
RUN git clone git://gcc.gnu.org/git/gcc.git gcc_repo && \
    cd gcc_repo && \
    git checkout releases/gcc-10.1.0 && \
    ./configure --enable-languages=c,c++ --disable-multilib && \
    make && \
    make install && \
    cd .. && \
    rm gcc_repo -r

# Set environment variables
ENV CC=/usr/local/bin/gcc
ENV CXX=/usr/local/bin/g++

CMD cd /home/out && cmake /home/source/. && make    

After building the image we can run it and compile a special version of our demo application that makes use of the magic operator (branch gcc20).
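
If you want a quick smoke test first (my own sketch; the cpp20_builder tag and the mounted path are just my choices), a one-liner using the spaceship operator is enough to confirm that GCC 10 and C++20 are really available inside the container:

$ cat > spaceship.cpp <<'EOF'
#include <compare>
int main() { return (1 <=> 2) < 0 ? 0 : 1; }  // "less" compares as < 0
EOF
$ docker run --rm -v$PWD:/src -w /src cpp20_builder g++ -std=c++20 -c spaceship.cpp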

Nice, but we can do better than that. When creating the image we added some dependencies that are needed only to build the required libraries. We need them only during the build process, and Docker provides a nice mechanism to deal with such situations: multi-stage builds.

Using this Docker-inside-a-Docker philosophy, we can protect our final image from all unwanted dependencies. All we need to do is split our file into two parts – one that produces the needed artifacts and a second that makes use of them.

FROM ubuntu:18.04 AS builder

ARG DEBIAN_FRONTEND=noninteractive

# Build image dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libgmp-dev \ 
    libmpfr-dev \
    libmpc-dev \ 
    bash \
    git \
    flex \
    gcc-multilib \
    libssl-dev \
    checkinstall \
    && rm -rf /var/lib/apt/lists/*

# Build CMAKE
RUN git clone https://github.com/Kitware/CMake.git cmake_repo && \
    cd cmake_repo && \
    git checkout v3.17.3 && \
    mkdir out && \
    ./bootstrap && \
    make && \
    checkinstall -D --pkgname=cmake --pkgversion=3.17.3


# Build GCC
RUN git clone git://gcc.gnu.org/git/gcc.git gcc_repo && \
    cd gcc_repo && \
    git checkout releases/gcc-10.1.0 && \
    mkdir out && \
    ./configure --enable-languages=c,c++ --disable-multilib && \
    make && \
    checkinstall -D --pkgname=gcc --pkgversion=10.1.0


# Target image
FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

# OpenCV Library
RUN apt-get update && apt-get install -y --no-install-recommends\
    libopencv-dev \
    libmpc-dev \
    && rm -rf /var/lib/apt/lists/*

# Copy packages from builder image
COPY --from=builder /cmake_repo/*.deb .
COPY --from=builder /gcc_repo/*.deb .

# Install CMAKE and GCC
RUN apt-get install -y ./*.deb && rm ./*.deb

# Set environment variables
ENV CC=/usr/local/bin/gcc
ENV CXX=/usr/local/bin/g++

CMD cd /home/out && cmake /home/source/. && make

Here we created two packages containing compiled versions of CMake and GCC. We copy and install them when creating the production image.
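
Usage stays the same as with the earlier builder image; the tag and the example project path below are just my own choices:

$ docker build -t cpp20_builder .
$ docker run --rm -v$PWD/project:/home/source -v$PWD/out:/home/out cpp20_builder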

Creating build environments with Docker

‘How to set up a build environment’ documents are like cooking recipes. Only they are written in a world where eggs are not always compatible with bowls, the teaspoon size changes every week, and the cooking time depends on the color of your kitchen walls. In our software world, every recipe needs to be reviewed and updated at least a few times a month; otherwise, you might be chased and eaten by the omelet you tried to prepare (or get a strange build error, which is even worse).

In such a hostile world, it is much easier to clone things than to recreate them. That is why we used virtual machines in the old days, and that is why we use Docker today.

This short article will show how to create a primitive build environment that will allow us to compile this simple piece of code (the example is taken from the Learning OpenCV 3 book).

The recipe

Step 1. Install Docker

On the day of writing this text, instructions for Ubuntu Linux can be found here, but tomorrow they might be somewhere else, so Google is your friend. You can try to copy-paste these commands, but there is no guarantee that they will work:

$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

You can verify that your Docker engine works by executing:

$ sudo docker run hello-world

You should see something like bla bla bla Hello from Docker! bla bla bla. Or some error message, in which case you need to start googling again. If it works, add yourself to the docker group, so you don’t need to use the sudo command every time.

$ sudo groupadd docker
$ sudo usermod -aG docker $USER
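
The group change only applies to new login sessions; to pick it up immediately (my addition), you can start a fresh group shell and re-run the test:

$ newgrp docker
$ docker run hello-world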

Step 2. Create Docker config file

Docker configuration files are similar to our “How to…” documents. The Docker builder uses them to produce the image, which is something like a lightweight virtual machine. Our config file (called Dockerfile) will look like this:

FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
apt-get install -y --no-install-recommends\
    gcc \
    g++ \
    cmake \
    libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

CMD cd /home/out && cmake /home/source/. && make

FROM, ARG, RUN and CMD are Docker keywords. A full description can be found here, so I will only extract the gist:

  • we use Ubuntu 18 as our base image
  • setting DEBIAN_FRONTEND to noninteractive saves us from all the questions from the package manager (timezone etc.)
  • we define packages that will be installed inside our image
  • we set a default command to build whatever sits inside /home/source

Doing the same with VM would require us to install Ubuntu on virtual hardware, add all needed packages (gcc etc.) and put the build command into some script.

Step 3. Build the image

Building an image is done with a build command. Here I present 2 versions that can be used for our example. The first one will work only if your config file is called Dockerfile; the second accepts the file name as an extra parameter. I also add a tag (ubuntu_gcc) to refer to this image by its name.

$ docker build -t ubuntu_gcc .
$ docker build -t ubuntu_gcc -f name_of_the_file .

Step 4. Run the image

Now we are ready to build our example application:

$ mkdir test && cd test
$ git clone git@github.com:yesiot/first_whale_toast.git fwt
$ docker run --rm -v$PWD/fwt:/home/source -v$PWD/out:/home/out ubuntu_gcc

Done. We can find the binary inside the out directory. There is one important feature that we use above – mounting directories. Our default command looks for the project inside /home/source, and the build folder is /home/out. With the extra -v parameter, we map our host directories into locations inside the container. You can quickly check how it works by running:

$ docker run --rm -v$PWD/fwt:/home/source -v$PWD/out:/home/out -it ubuntu_gcc /bin/bash

Now, instead of the default command, we run an interactive bash session. Inspect the home folder and try to create some files inside the source directory. You can verify that the created files are visible on your host machine as well.
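
A tiny experiment (my addition) makes the mapping obvious:

$$ touch /home/source/hello_from_container
$$ exit
$ ls fwt/   # the new file shows up in the host directory as well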

Step 5. Clouditize

Till now, we have just learned a fancy way to keep the project dependencies isolated from the main development machine. The real magic begins when you push your image into the cloud. For this, I created a public Docker Hub repository and executed the following commands:

$ docker tag ubuntu_gcc:latest <your docker id>/builder_open_cv:1.0.0
$ docker push <your docker id>/builder_open_cv:1.0.0

The image sits now in the repository and can be used by anyone without a need to rerun the build process.

$ docker run -v$PWD/first_whale_toast:/home/source -v$PWD/out:/home/out <your docker id>/builder_open_cv:1.0.0

Maybe this is not the most impressive example, but for a project with a huge number of dependencies (like Yocto), this can allow you to start compiling as soon as the image (Yocto CROPS) is downloaded.