Taking pictures with imx8mp-evk and Basler dart camera

Intro

A new toy arrived: an i.MX 8M Plus evaluation board from NXP with a Basler 5MP camera module. This is going to be fun. Box opened. Cables connected. Power on. LEDs blinking. Time to take the first picture.

Step 1. Check device tree

One thing we need for sure is a camera. The hardware is connected, but does Linux know about it?

Nowadays, the Linux OS gets information about attached hardware from a device tree*. This makes the configuration more flexible and does not require recompiling the kernel for every hardware change. The NXP BSP package already contains a device tree file for the Basler camera, so my only job is to check whether it is used and enable it if not.

First, we will check if the default device tree contains info about my camera. My system is up and running, so I can inspect the device tree by looking around in the sys directory. But first we need to know what we are looking for. If you open the Basler device tree source file**, you can see that the camera is attached to the I2C bus:

...
#include "imx8mp-evk.dts"

&i2c2 {
	basler_camera_vvcam@36 {
...

If you now go to imx8mp.dtsi***, you discover that I2C2 is mapped to the address 0x30a30000.

...
	soc@0 {
		compatible = "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
		ranges = <0x0 0x0 0x0 0x3e000000>;

		caam_sm: caam-sm@100000 {
			compatible = "fsl,imx6q-caam-sm";
			reg = <0x100000 0x8000>;
		};

		aips1: bus@30000000 {
			compatible = "simple-bus";
			reg = <0x30000000 0x400000>;
                ...
		aips3: bus@30800000 {
			compatible = "simple-bus";
			reg = <0x30800000 0x400000>;
			#address-cells = <1>;
			#size-cells = <1>;
			ranges;

			ecspi1: spi@30820000 {
                        ...
			i2c2: i2c@30a30000 {
				#address-cells = <1>;
				#size-cells = <0>;
				compatible = "fsl,imx8mp-i2c", "fsl,imx21-i2c";
				reg = <0x30a30000 0x10000>;
				interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>;
				clocks = <&clk IMX8MP_CLK_I2C2_ROOT>;
				...

Let's check if this node is present on the target.

$ cd /sys/firmware/devicetree/base/soc@0/bus@30800000/i2c@30a30000
$ ls
#address-cells  adv7535@3d       clocks      interrupts              name            pinctrl-0      reg     tcpc@50
#size-cells     clock-frequency  compatible  lvds-to-hdmi-bridge@4c  ov5640_mipi@3c  pinctrl-names  status

Our camera is not listed there, so it's the device tree we need to fix first.

Step 2. Set device tree

To change the device tree, we need to jump into U-Boot. Just restart the board and press any key when you see:
Hit any key to stop autoboot
Then we can check and change the boot-loader settings. First, let's see what we have:

u-boot=> printenv
baudrate=115200                                                                 
board_name=EVK  
...
fastboot_dev=mmc2                                                               
fdt_addr=0x43000000                                                             
fdt_file=imx8mp-evk.dtb                                                         
fdt_high=0xffffffffffffffff                                                     
fdtcontroladdr=51bf7438                                                         
image=Image  
...
serial#=0b1f300028e99b32                                                        
soc_type=imx8mp                                                                 
splashimage=0x50000000                                                          
                                                                                
Environment size: 2359/4092 bytes  

As expected, U-Boot uses the default device tree for our evaluation board. Let's try to find the one with the Basler camera config. I know it should sit in the eMMC, so I will start there.

u-boot=> mmc list
FSL_SDHC: 1
FSL_SDHC: 2 (eMMC)

u-boot=> mmc part

Partition Map for MMC device 2  --   Partition Type: DOS

Part    Start Sector    Num Sectors     UUID            Type
  1     16384           170392          a5b9776e-01     0c Boot
  2     196608          13812196        a5b9776e-02     83

u-boot=> fatls mmc 2:1
 29280768   Image
    56019   imx8mp-ab2.dtb
    61519   imx8mp-ddr4-evk.dtb
    61416   imx8mp-evk-basler-ov5640.dtb
    61432   imx8mp-evk-basler.dtb
    62356   imx8mp-evk-dsp-lpa.dtb
    62286   imx8mp-evk-dsp.dtb
    61466   imx8mp-evk-dual-ov2775.dtb
    61492   imx8mp-evk-ecspi-slave.dtb

We got it! Now it's time to set it as the default one and boot the board again.

u-boot=> setenv fdt_file imx8mp-evk-basler.dtb
u-boot=> saveenv
Saving Environment to MMC... Writing to MMC(2)... OK
u-boot=> boot

After the reboot, you can check in the directory we inspected last time that the camera hardware is now present in the device tree.
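
For example (the node name comes from the Basler DTS snippet we looked at in step 1):

$ ls /sys/firmware/devicetree/base/soc@0/bus@30800000/i2c@30a30000 | grep basler
basler_camera_vvcam@36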

Step 3. Get the image

Finally, we can get our image (a blob of pixels, to be clear). The easy way would be to connect a screen and run one of the NXP demo apps (though you need to flash your board with the full image to get them). But easy solutions are for people who have their dev boards within reach. Mine is running upstairs, and I prefer some extra typing to walking there. First, let's check the data formats supported by our camera.

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'YUYV' (YUYV 4:2:2)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [1]: 'NV12' (Y/CbCr 4:2:0)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [2]: 'NV16' (Y/CbCr 4:2:2)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [3]: 'BA12' (12-bit Bayer GRGR/BGBG)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)

We can grab raw data using the following command:

$ v4l2-ctl --set-fmt-video=width=3840,height=2160,pixelformat=YUYV --stream-mmap --stream-count=1 --device /dev/video0 --stream-to=data.raw
<
$ ls
data.raw

Copy the raw data file to your development machine and execute this simple Python script, which extracts the Y component out of the YUV422 image:

# yuv_2_rgb.py (does not really convert, but good enough to check if the cam is working)
import sys
from PIL import Image

in_file_name = sys.argv[1]
out_file_name = sys.argv[2]

with open(in_file_name, "rb") as src_file:
    raw_data = src_file.read()
    # In YUYV, every second byte is a luma (Y) sample, so raw_data[0::2]
    # gives us a plain grayscale image.
    img = Image.frombuffer("L", (3840, 2160), raw_data[0::2])
    img.save(out_file_name)


# RUN THIS ON YOUR DEV PC/MAC:
$ scp root@YOUR_BOARD_IP:data.raw .
$ python3 yuv_2_rgb.py data.raw data.bmp
$ xdg-open data.bmp

You should see a black and white image of whatever your camera was pointing at.
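
If you want actual color instead of only the luma plane, a rough conversion sketch using OpenCV could look like this (a hypothetical yuv_to_color.py, assuming the same 3840x2160 YUYV frame and that the opencv-python and numpy packages are installed):

# yuv_to_color.py (hypothetical companion script, not part of the original workflow)
import sys
import cv2
import numpy as np

# YUYV packs 2 bytes per pixel, so the frame has shape (height, width, 2)
raw = np.fromfile(sys.argv[1], dtype=np.uint8).reshape(2160, 3840, 2)
# YUY2 uses the same byte order as YUYV
bgr = cv2.cvtColor(raw, cv2.COLOR_YUV2BGR_YUY2)
cv2.imwrite(sys.argv[2], bgr)

$ python3 yuv_to_color.py data.raw color.png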

Step 4. Movie time

Now it is time to get some moving frames. I will use GStreamer to send the image from the camera to my laptop with 2 simple commands:

# RUN THIS ON YOUR IMX EVALUATION BOARD (replace @YOUR_IP@ with your ip address): 
$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! videoconvert ! videoscale ! videorate ! video/x-raw,framerate=30/1,width=320,height=240 ! vpuenc_h264 ! rtph264pay ! udpsink host=@YOUR_IP@ port=5000

# RUN THIS ON YOUR DEV PC/MAC:
$ gst-launch-1.0 udpsrc port=5000 !  application/x-rtp ! rtph264depay ! avdec_h264 ! autovideosink

That’s it. We can see the world through the i.MX eyes/sensors. You can play with the stream settings (image size, frame rate, etc.) or pump the data into some advanced image processing software. Whatever you do, have fun!
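
If you prefer to record instead of stream, the same encoder can write an MP4 file on the board (a sketch built from standard GStreamer elements; the -e flag makes Ctrl-C finalize the file properly):

# RUN THIS ON YOUR IMX EVALUATION BOARD:
$ gst-launch-1.0 -e v4l2src device=/dev/video0 ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! vpuenc_h264 ! h264parse ! mp4mux ! filesink location=video.mp4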




* if you are interested in the device trees, there are some great materials from Bootlin

** at the moment of writing, the DTS for the Basler camera can be found e.g. here, but since NXP is busy with de-Freescalization, I expect it to be moved to some imx folder in the future

*** device trees are constructed in a hierarchical way, and for our board imx8mp.dtsi is the topmost one


Code monkey detected (80% confidence)

The story begins with The Things Network Conference. Signed in. Got an Arduino Portenta board (camera shield included) and attended an inspiring Edge Impulse workshop about building an embedded, machine-learning-based elephant detector.

Finding elephants in my room is not a very useful thing. First: I never miss one if it shows up. Second: no elephant has ever shown up in my room. But there is a monkey. It eats one banana in the morning to get energy and one banana in the evening before it goes to bed*. Between eating its bananas, it sits, codes, drinks coffee, codes, does some exercises, and codes even more. Let's detect this monkey and make a fortune by selling its presence information to big data companies.

Checklist

Four years ago, I half-finished a "Machine Learning" online training (knowledge: checked). Once I ran an out-of-the-box cat detection model with the Caffe framework (experience: checked). I read the book** (book: checked). I have no clue what I will be doing, but hey, that is how code monkeys work.

Step one: get Arduino

I use an Arduino board, so I need an Arduino IDE. I follow the steps described here and have a working IDE. Now I probably need to install some extra stuff.

Step two: extra stuff

Following online tutorials, I add the Portenta board (Tools->Board->Boards Manager, type "mbed", install the Arduino "mbed-enabled Boards" package). I struggle for some time with the camera example. The output image does not seem to be right. I am stuck for some time trying to figure out why the image height should be 244 instead of 240. The camera code looks like a very beta thing. Maybe I am just using the wrong package version. I switch from 1.3.1 to 1.3.2. The example works.

Next, I add the TensorFlow library. Tools->Manage Libraries, then type "tensor". From the list, I install Arduino_TensorFlowLite.

Step three: run “Hello World”

The book starts with an elementary example, which uses machine learning to control the PWM duty cycle of an LED. This example targets the Arduino Nano 33 BLE device, but apparently, it also works on the Portenta without any code modifications. I select it, upload it, and stay there watching as the LED changes its brightness.

Step four: detect the monkey

After some time of watching the red LED, I am ready to do some actual coding. First, I switch to the person detection example (File->Examples->Arduino_TensorFlowLite->person_detection). Then I modify the arduino_image_provider.cpp file, which contains the GetImage function used to get the image out of our camera. I throw away all its content and replace it with a modified version of the Portenta CameraCaptureRawBytes example:

#include <mbed.h>
#include "image_provider.h"
#include "camera.h"

const uint32_t cImageWidth = 320;
const uint32_t cImageHeight = 240;
// Marker used by the PC viewer to find the start of a frame
uint8_t sync[] = {0xAA, 0xBB, 0xCC, 0xDD};

CameraClass cam;
uint8_t buffer[cImageWidth * cImageHeight];

// Get the camera module ready
void InitCamera(tflite::ErrorReporter* error_reporter) {
  TF_LITE_REPORT_ERROR(error_reporter, "Attempting to start the camera");
  cam.begin();
}

// Get an image from the camera module
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
                      int image_height, int channels, int8_t* image_data) {
  static bool g_is_camera_initialized = false;
  if (!g_is_camera_initialized) {
    InitCamera(error_reporter);
    g_is_camera_initialized = true;
  }

  cam.grab(buffer);
  Serial.write(sync, sizeof(sync));
  
  // Crop a centered image_width x image_height window out of the full frame
  auto xOffset = (cImageWidth - image_width) / 2;
  auto yOffset = (cImageHeight - image_height) / 2;
  
  for(int i = 0; i < image_height; i++) {
    for(int j = 0; j < image_width; j++) {
      image_data[(i * image_width) + j] = buffer[((i + yOffset) * cImageWidth) + (xOffset + j)];
    }
  }
    
  Serial.write(reinterpret_cast<uint8_t*>(image_data), image_width * image_height);

  return kTfLiteOk;
}

And a small change to the main program, so it shouts if the monkey is there.

...
  // Process the inference results.
  int8_t person_score = output->data.uint8[kPersonIndex];
  int8_t no_person_score = output->data.uint8[kNotAPersonIndex];

  if(person_score > 50 && no_person_score < 50) {
    TF_LITE_REPORT_ERROR(error_reporter, "MONKEY DETECTED!\n");
    TF_LITE_REPORT_ERROR(error_reporter, "Score %d %d.", person_score, no_person_score);
  }

Besides passing the image data to the model, I also send it via the serial port. On the PC side, I will use a simple OpenCV program to display the camera view (which can be useful when testing).

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>

#include <iostream>
#include <thread>

#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>

using namespace cv;

const uint8_t cHeight = 96;
const uint8_t cWidth = 96;
uint8_t cSyncWord[] = {0xAA, 0xBB, 0xCC, 0xDD};

int main(int, char**)
{
    Mat frame, fgMask, back, dst;

    boost::asio::io_service io;
    boost::system::error_code error;

    auto port = boost::asio::serial_port(io);
    port.open("/dev/ttyACM0");
    port.set_option(boost::asio::serial_port_base::baud_rate(115200));

    std::thread t([&io]() {io.run();});
    // Detach: the io thread has no async work to do, and a joinable
    // thread would terminate the program when its destructor runs.
    t.detach();

    while(true) {

        uint8_t buffer[cWidth * cHeight];

        uint8_t syncByte = 0;
        uint8_t currentByte;

        // Read bytes until the full 4-byte sync word has been seen
        while (true) {

            boost::asio::read(port, boost::asio::buffer(&currentByte, 1));
            if (currentByte == cSyncWord[syncByte]) {
                syncByte++;
            } else {
                std::cerr << (char) currentByte;
                syncByte = 0;
            }
            if (syncByte == 4) {
                std::cerr << std::endl;
                break;
            }
        }

        // The frame data follows directly after the sync word
        boost::asio::read(port, boost::asio::buffer(buffer, cHeight * cWidth));

        frame = cv::Mat(cHeight, cWidth, CV_8U, buffer);

        if (frame.empty()) {
            std::cerr << "ERROR! blank frame grabbed" << std::endl;
            continue;
        }
        imshow("View", frame);

        if (waitKey(5) >= 0)
            break;
    }
    return 0;
}
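
To build the viewer on the PC, something like the following should work (a sketch; the file name viewer.cpp and the opencv4 pkg-config name are assumptions that depend on your setup):

$ g++ -std=c++17 viewer.cpp -o viewer $(pkg-config --cflags --libs opencv4) -lboost_system -lpthread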

I upload the sketch. Run the app. Point the camera at me. And… we got him! Monkey detected. I think I deserved a banana.

* bananas seem to possess some kind of fruit magic that makes them work according to the user's (eater's) needs. Please google "boost your energy with banana" and "get better sleep with banana" if you want more details

** TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers by Pete Warden and Daniel Situnayake

Using CLion with Docker

The simple way is to follow this tutorial created by someone from the CLion team. There is a big chance you will get all the help you need there. You can also try to read this post, where someone who has no clue what he is doing tries to set things up in his build environment. You can also follow this someone's approach and write a short article on how to do things, because writing is learning. Or was it something with sharing… never mind. Let's go.

Assumption no. 1 – you have a CLion installed.

Assumption no. 2 – you have a Docker engine running on your machine.

This is our source file (an example from the ImageMagick library):

#include <Magick++.h> 
#include <iostream> 

using namespace std; 
using namespace Magick; 

int main(int argc,char **argv) 
{ 
  InitializeMagick(*argv);

  Image image;
  try { 
    image.read( "logo:" );
    image.crop( Geometry(100,100, 100, 100) );
    image.write( "logo.png" ); 
  } 
  catch( Exception &error_ ) 
    { 
      cout << "Caught exception: " << error_.what() << endl; 
      return 1; 
    } 
  return 0; 
}

This is our cmake config file:

cmake_minimum_required(VERSION 3.10)
project(image_cropper)

find_package(ImageMagick REQUIRED COMPONENTS Magick++)

add_executable(app main.cpp)
target_include_directories(app PRIVATE ${ImageMagick_INCLUDE_DIRS})
target_link_libraries(app ${ImageMagick_LIBRARIES})

And this is the error we get when we try to run cmake:

CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
  Could NOT find ImageMagick (missing: ImageMagick_Magick++_LIBRARY)

Not good. But instead of polluting our machine with the libmagick++-dev package, we will pollute it with a container (package inside). Containers are much easier to clean up, maintain and send to CI when needed. Our Docker config file looks like this:

FROM ubuntu:18.04

RUN apt-get update \
  && apt-get install -y \
    build-essential \
    gcc \
    g++ \
    cmake \
    libmagick++-dev \
  && apt-get clean

And we can execute the following commands to build our image and the application.

$ docker build -t magic_builder .
$ docker run -v$PWD:/work -it magic_builder /bin/bash
$$ cd work
$$ mkdir build_docker && cd build_docker
$$ cmake .. && make -j 8

So far, so good, but our IDE is sitting in a corner looking sad as we type some shell commands. That is not how it is supposed to be. Come here CLion it is time for you to help us (ears up, tongue out, and it jumps happily into the foreground).

Step 1 Add ssh, rsync and gdb to the image. Also, add an extra user that can be used for opening the ssh connection.

FROM ubuntu:18.04

RUN apt-get update \
  && apt-get install -y \
    gdb \
    ssh \
    rsync \
    build-essential \
...

RUN useradd -m user && yes password | passwd user

Step 2 Rebuild and start the container. Check if the ssh service is running and start it if not.

$ docker build -t magic_builder .
$ docker run --cap-add sys_ptrace -p127.0.0.1:2222:22 -it magic_builder /bin/bash
$$ service ssh status
 * sshd is not running
$$ service ssh start

Step 3 Now go to your IDE settings (Ctrl+Alt+S), section Build, Execution, Deployment, and add a Remote Host toolchain. Add new credentials, filling in the user name and password from the Dockerfile and the port from the run command (2222).

If everything goes well, you should have 3 green checkmarks. Now switch to the new toolchain in your CMake profile (you can also add a separate profile if you want). That’s it. You can build, run, and debug using your great CLion IDE. Some improvements could still be made to our Docker image (auto-start of ssh, running in the background), but all of this is already somewhere on the Internet (same as this instruction, but who cares).
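
For the auto-start part, one option is to run the container in the background with sshd as its main process (a sketch; on Ubuntu 18.04 the privilege separation directory has to exist before sshd starts):

$ docker run -d --cap-add sys_ptrace -p127.0.0.1:2222:22 magic_builder \
    /bin/bash -c "mkdir -p /run/sshd && /usr/sbin/sshd -D"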

Color recognition with Raspberry Pi

We know how to build. We know how to run. We even know how to use different architectures. Time to start doing something useful.

Our goal: deploy a simple, containerized application that will display the color seen by the Raspberry Pi’s camera*. The described application and Docker config file can be found here.

Because we target the ARMv6 architecture, I decided to base the Docker image on Alpine Linux. It supports many different architectures and is very small (5MB for the minimal version). Let’s add the OpenCV lib and the target application on top of it. Below is the Docker config file.

FROM arm32v6/alpine

ARG OPENCV_VERSION=3.1.0

RUN apk add --no-cache \
    linux-headers \
    gcc \
    g++ \
    git \
    make \
    cmake \
    raspberrypi

RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
    unzip ${OPENCV_VERSION}.zip && \
    rm -rf ${OPENCV_VERSION}.zip && \
    mkdir -p opencv-${OPENCV_VERSION}/build && \
    cd opencv-${OPENCV_VERSION}/build && \
    cmake \
    -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D WITH_FFMPEG=NO \
    -D WITH_IPP=NO \
    -D WITH_OPENEXR=NO \
    -D WITH_TBB=YES \
    -D BUILD_EXAMPLES=NO \
    -D BUILD_ANDROID_EXAMPLES=NO \
    -D INSTALL_PYTHON_EXAMPLES=NO \
    -D BUILD_DOCS=NO \
    -D BUILD_opencv_python2=NO \
    -D BUILD_opencv_python3=NO \
    .. && \
    make -j$(nproc) && \
    make install && \
    rm -rf /opencv-${OPENCV_VERSION} 

COPY src/** /app/
RUN mkdir -p /app/build && cd /app/build && cmake .. && \ 
    make && \
    make install && \
    rm /app -rf 

ENTRYPOINT ["/usr/local/bin/opencv_hist"]
CMD ["0"] 

I set my binary as the entry point, so it will run when the container is started. I also use CMD to set a default parameter, which is the camera index; if none is given, 0 will be used. Now we can build and push this image to Docker Hub (or another Docker registry).

$ docker build --rm -t 4point2software/rpi0 .
$ docker push 4point2software/rpi0

Assuming you have a camera attached and configured on your Pi Zero, you can execute the following lines on the target machine.

$ docker pull 4point2software/rpi0
$ docker run --device /dev/video0 4point2software/rpi0

This should result in an output that will change every time you present a different color to your camera.
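
Because the camera index is only a CMD default, you can override it at run time without rebuilding the image. For example, to use a second camera (assuming /dev/video1 exists on your Pi):

$ docker run --device /dev/video1 4point2software/rpi0 1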

A very nice thing about building applications inside containers is that they are completely independent of the target file system. The only requirement is the Docker engine – all the rest we ship with our app. Different architecture needed? Just change the base image (e.g. arm32v7/alpine). No need to set up a new toolchain, cross-compile, and all related stuff. Think about sharing build environments and CI builds, and you will definitely consider this something worth trying.

* I will be using a Raspberry Pi 0 with a camera module to get the peak value of the HS histogram

Building Docker image for Raspberry Pi

We like containers, and we like Pi computers. Containers running on our Pis would be like a “double like”, and although a “double like” does not exist in the real world*, running Docker on a Pi is still possible.

A good (though a little bit outdated) instruction on how to install Docker on your board can be found in the first part of this article. A short copy-paste is here:

$ sudo apt-get install apt-transport-https ca-certificates software-properties-common -y
$ curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh
$ sudo usermod -aG docker pi
$ sudo reboot
$ sudo systemctl start docker.service

After following the described steps, you should be able to run the Docker engine and start / stop containers.
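
A quick sanity check (hello-world is a multi-arch image, so it runs on the Pi as well):

$ docker run --rm hello-world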

Let’s now try to create a sample image. Please note that we do this on our host machine, as our experience tells us that it’s always faster and does not require putting cups of cold water on top of the Pi CPU. We search for “raspbian” on Docker Hub and find the raspbian/stretch image, which should be good to start with. We will add a Boost library just to have some extra example layer.

FROM raspbian/stretch
RUN apt-get update && apt-get install -y \
   libboost1.62-all \
   && rm -rf /var/lib/apt/lists/*

We try to build it with:

$ docker build -t rpi_test .

And we get the following error:

Sending build context to Docker daemon  2.048kB
 ...
 standard_init_linux.go:211: exec user process caused "exec format error"
 ...

So it is almost working. Just not quite yet. The problem is that raspbian/stretch is an ARM image (which makes sense, as the Pi is an ARM device). You can check it by running:

$ sudo docker image inspect raspbian/stretch | grep Arch
   "Architecture": "arm",

How do we run an ARM image on the x86 architecture? With an emulator. QEMU has always helped us in such cases, so let’s make it do its magic for us.

$ sudo apt install qemu-user-static
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

The first line installs the emulator. The second tells our kernel to open ARM (and other architectures’) binaries with QEMU. You can also do this part manually by running the commands below. In that case, only the 32-bit ARM architecture gets registered, so you would need to repeat the step for every architecture you would like to use (with the correct magic strings).

$ sudo sh -c "echo -1 > /proc/sys/fs/binfmt_misc/qemu-arm"
$ sudo sh -c "echo ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:OCF' > /proc/sys/fs/binfmt_misc/register"

Now we can not only build our custom images for the Pi but also run them to inspect and adjust before deployment. You can push this image into your Docker Hub repo and try to pull it from the Raspberry Pi.

$ docker tag rpi_test <docker_id>/rpi_test
$ docker login
$ docker push <docker_id>/rpi_test

And on your Pi.

$ docker run -it <docker_id>/rpi_test /bin/bash

Should work :). 👍👍

* liking something twice results in not liking at all

Docker-compose and user ID

Many articles mention that there is one more way to run a container as a different user: using docker-compose, a tool that instructs Docker on how to run our containers. In short: instead of command-line parameters, we use a structured config file that can look like this:

version: '3'
services:
  my_service:
    image: my_image
    command: /bin/bash

We can start our service by running:

$ docker-compose run my_service

Which is equivalent to:

$ docker run -it my_image /bin/bash

Specifying a UID is just one more line in the config file:

version: '3'
services:
  my_service:
    image: my_image
    user: $UID:$GID
    command: /bin/bash

Unfortunately, bash sets UID but not GID by default, so we need to set GID ourselves before running docker-compose. Using the id command inside the config file won’t work, as the file is not pre-processed in any way.

$ GID=$(id -g) docker-compose run my_service
$$ id
uid=1000 gid=1000 groups=1000

Conclusion

Docker-compose is a nice tool that wraps the docker run command and allows you to make the configuration part of the project. It can be useful when creating build environments, automated tests, or complex run configurations.

How to set user when running a Docker container

In the previous post, we created a C++ 20 app builder image. It works, but it has one very annoying feature: all created output files are owned by the root user.

$ ls -all
-rw-r--r-- 1 root root 13870 okt 18 20:06 CMakeCache.txt
drwxr-xr-x 5 root root 4096 okt 18 20:06 CMakeFiles
-rw-r--r-- 1 root root 1467 okt 18 20:06 cmake_install.cmake
-rw-r--r-- 1 root root 5123 okt 18 20:06 Makefile
-rwxr-xr-x 1 root root 54752 okt 18 20:06 opencv_hist

The reason is that, by default, a Docker container runs as root, and all operations inside are executed on its behalf. And yes, you can use Docker to bypass the root restrictions on your host machine (if not running in rootless mode).

$ echo "only root can read me" > secret_file
$ chmod 600 secret_file && sudo chown root:root secret_file
$ cat secret_file
cat: secret_file: Permission denied
$ docker run -v$PWD:/work -it my_image /bin/bash
$$ cat /work/secret_file
only root can read me

We don’t care about security for now – we just want to delete our object files without typing a password all the time.

Specifying user id

A simple trick is to use the docker run command with a user argument. As you might guess, it allows you to specify the user that will be used when running the container. Interestingly, if you use a numeric ID, the user does not have to exist inside the container. The given UID will just be used in place of root, which allows us to do this:

$ docker run --user "$(id -u):$(id -g)" -it my_image /bin/sh
$$ id
uid=1000 gid=1000 groups=1000
$$ whoami
whoami: cannot find name for user ID 1000

As you can see, user 1000 does not really exist. For our simple build example that is fine, but it can cause trouble for some operations.

Creating user

To create an additional user with a specific UID, we can add these 3 lines to the Dockerfile.

...
RUN addgroup --gid 1000 my_user
RUN adduser --disabled-password --gecos '' --uid 1000 --gid 1000 my_user
USER my_user
...

We disable the password and provide empty GECOS data (full name, phone number, etc.).

We can improve it a little by removing hard-coded ID numbers and using arguments instead.


ARG GROUP_ID
ARG USER_ID
...
RUN addgroup --gid $GROUP_ID my_user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID my_user
USER my_user
...

Now we can pass our UID while building the image.

$ docker build --build-arg GROUP_ID=$(id -g) --build-arg USER_ID=$(id -u) -t json_test .

Unfortunately, this means that anyone who wants to use our image will have to rebuild it to make sure that her/his ID matches the one inside the container (my_user).
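
A quick way to confirm that the IDs line up (json_test is the tag from the build command above; the exact numbers depend on your host user):

$ docker run --rm json_test id
uid=1000(my_user) gid=1000(my_user) groups=1000(my_user)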

Use Docker to compile C++ 20

Last time we created a simple Docker image that allowed building any OpenCV-based application. Today we will go one step further and allow our image to compile spaceships.

By spaceship I mean the three-way comparison operator (<=>) – a new feature of the C++ language (the still not officially released version 20). To use it, we need GCC 10, so let’s add it to our image. We need to add the same steps to our recipe that you would normally execute on your dev machine: get dependencies, clone, build, and install. I also included a newer CMake version because the default one used by Ubuntu 18 (3.16.0) did not support C++ 20 yet.
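
For reference, this is the kind of code we want the image to compile – a minimal sketch of the spaceship operator (not part of the original demo app):

#include <compare>
#include <iostream>

// One defaulted operator<=> gives us <, <=, >, >= (and ==, !=) for free.
struct Point {
    int x;
    int y;
    auto operator<=>(const Point&) const = default;
};

int main() {
    Point a{1, 2};
    Point b{1, 3};
    std::cout << (a < b) << '\n';  // prints 1 (true), derived from <=>
}

And here is the image recipe: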

FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

# GCC dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libgmp-dev \ 
    libmpfr-dev \
    libmpc-dev \ 
    bash \
    git \
    flex \
    gcc-multilib \
    && rm -rf /var/lib/apt/lists/*

# CMAKE dependencies
RUN apt-get update && apt-get install -y \
    libssl-dev \
    && rm -rf /var/lib/apt/lists/*

# OpenCV Library
RUN apt-get update && apt-get install -y --no-install-recommends\
    libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

# Install CMAKE
RUN git clone https://github.com/Kitware/CMake.git cmake_repo && \
    cd cmake_repo && \
    git checkout v3.17.3 && \
    ./bootstrap && \
    make && \
    make install && \
    cd .. && \
    rm cmake_repo -r

# Install GCC
RUN git clone git://gcc.gnu.org/git/gcc.git gcc_repo && \
    cd gcc_repo && \
    git checkout releases/gcc-10.1.0 && \
    ./configure --enable-languages=c,c++ --disable-multilib && \
    make && \
    make install && \
    cd .. && \
    rm gcc_repo -r

# Set environment variables
ENV CC=/usr/local/bin/gcc
ENV CXX=/usr/local/bin/g++

CMD cd /home/out && cmake /home/source/. && make    

After building the image we can run it and compile a special version of our demo application that makes use of the magic operator (branch gcc20).
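
Assuming the image was tagged cpp20_builder, building and running mirrors the previous post (the mounted folders match the CMD baked into the image; replace $PWD/source with your project checkout):

$ docker build -t cpp20_builder .
$ docker run --rm -v$PWD/source:/home/source -v$PWD/out:/home/out cpp20_builder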

Nice, but we can do better than that. When creating the image, we added some dependencies needed to build the required libraries. We need them only during the build process, and Docker provides a nice mechanism to deal with such situations: multi-stage builds.

Using this Docker-inside-a-Docker philosophy, we can protect our final image from all unwanted dependencies. All we need to do is split our file into two parts – one that produces the needed artifacts and a second that makes use of them.

FROM ubuntu:18.04 AS builder

ARG DEBIAN_FRONTEND=noninteractive

# Build image dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libgmp-dev \ 
    libmpfr-dev \
    libmpc-dev \ 
    bash \
    git \
    flex \
    gcc-multilib \
    libssl-dev \
    checkinstall \
    && rm -rf /var/lib/apt/lists/*

# Build CMAKE
RUN git clone https://github.com/Kitware/CMake.git cmake_repo && \
    cd cmake_repo && \
    git checkout v3.17.3 && \
    mkdir out && \
    ./bootstrap && \
    make && \
    checkinstall -D --pkgname=cmake --pkgversion=3.17.3


# Build GCC
RUN git clone git://gcc.gnu.org/git/gcc.git gcc_repo && \
    cd gcc_repo && \
    git checkout releases/gcc-10.1.0 && \
    mkdir out && \
    ./configure --enable-languages=c,c++ --disable-multilib && \
    make && \
    checkinstall -D --pkgname=gcc --pkgversion=10.1.0


# Target image
FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

# OpenCV Library
RUN apt-get update && apt-get install -y --no-install-recommends\
    libopencv-dev \
    libmpc-dev \
    && rm -rf /var/lib/apt/lists/*

# Copy packages from builder image
COPY --from=builder /cmake_repo/*.deb .
COPY --from=builder /gcc_repo/*.deb .

# Install CMAKE and GCC
RUN apt-get install -y ./*.deb && rm *.deb

# Set environment variables
ENV CC=/usr/local/bin/gcc
ENV CXX=/usr/local/bin/g++

CMD cd /home/out && cmake /home/source/. && make

Here we created 2 packages containing the compiled versions of cmake and gcc. We copy and install them when creating the production image.

Creating build environments with Docker

‘How to set up a build environment’ documents are like cooking recipes. Only they are written in a world where eggs are not always compatible with bowls, the teaspoon size changes every week, and the cooking time depends on the color of your kitchen walls. In our software world, every recipe needs to be reviewed and updated at least a few times a month; otherwise, you might be chased and eaten by the omelet you tried to prepare (or get a strange build error, which is even worse).

In such a hostile world, it is much easier to clone things than to recreate them. That is why we used virtual machines in the old days, and that is why we use Docker today.

This short article will show how to create a primitive build environment that will allow us to compile this simple piece of code (an example taken from the Learning OpenCV 3 book).

The recipe

Step 1. Install Docker

On the day of writing this text, instructions for Ubuntu Linux can be found here, but tomorrow they might be somewhere else, so Google is your friend. You can try to copy-paste these commands, but there is no guarantee that they will work:

$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

You can verify that your Docker engine works by executing:

$ sudo docker run hello-world

You should see something like bla bla bla Hello from Docker! bla bla bla. Or some error message, in which case you need to start googling again. If it works, add yourself to the docker group, so you don’t need to use the sudo command every time.

$ sudo groupadd docker
$ sudo usermod -aG docker $USER

Step 2. Create Docker config file

Docker configuration files are similar to our “How to…” documents. The Docker builder uses them to produce the image, which is something like a lightweight virtual machine. Our config file (called Dockerfile) will look like this:

FROM ubuntu:18.04

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
apt-get install -y --no-install-recommends\
    gcc \
    g++ \
    cmake \
    libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

CMD cd /home/out && cmake /home/source/. && make

FROM, ARG, RUN and CMD are Docker keywords. A full description can be found here, so I will only extract the gist:

  • we use Ubuntu 18 as our base image
  • setting DEBIAN_FRONTEND to noninteractive saves us from all the questions from the package manager (timezone etc.)
  • we define packages that will be installed inside our image
  • we set a default command to build whatever sits inside the /home/source 

Doing the same with VM would require us to install Ubuntu on virtual hardware, add all needed packages (gcc etc.) and put the build command into some script.

Step 3. Build the image

Building an image is done with the build command. Here I present 2 versions that can be used for our example. The first one will work only if your config file is called Dockerfile; the second accepts the file name as an extra parameter. I also add a tag (ubuntu_gcc) so I can refer to this image by name.

$ docker build -t ubuntu_gcc .
$ docker build -t ubuntu_gcc -f name_of_the_file .

Step 4. Run the image

Now we are ready to build our example application:

$ mkdir test && cd test
$ git clone git@github.com:yesiot/first_whale_toast.git fwt
$ docker run --rm -v$PWD/fwt:/home/source -v$PWD/out:/home/out ubuntu_gcc

Done. We can find the binary inside the out directory. There is one important feature that we use above – mounting directories. Our default command looks for the project inside /home/source, and the build folder is /home/out. With the extra -v parameter, we map our host directories into locations inside the container. You can quickly check how it works by running:

$ docker run --rm -v$PWD/fwt:/home/source -v$PWD/out:/home/out -it ubuntu_gcc /bin/bash

Now, instead of the default command, we run an interactive bash session. Inspect the home folder and try to create some files inside the source directory. You can verify that the created files are visible on your host machine as well.
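
For example (a quick demonstration that the bind mount works both ways):

$$ touch /home/source/hello_from_container
$$ exit
$ ls fwt/hello_from_container
fwt/hello_from_container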

Step 5. Clouditize

Till now, we have just learned a fancy way to keep project dependencies isolated from the main development machine. The real magic begins when you push your image into the cloud. For this, I created a public Docker Hub repository and executed the following commands:

$ docker tag ubuntu_gcc:latest <your docker id>/builder_open_cv:1.0.0
$ docker push <your docker id>/builder_open_cv:1.0.0

The image sits now in the repository and can be used by anyone without a need to rerun the build process.

$ docker run -v$PWD/first_whale_toast:/home/source -v$PWD/out:/home/out <your docker id>/builder_open_cv:1.0.0

Maybe this is not the most impressive example, but for a project with a huge number of dependencies (like Yocto), this can let you start compiling as soon as the image (e.g. Yocto CROPS) is downloaded.

Toaster. The definition

Toaster. A small electric appliance that turns bread slices into toasts. As long as you call it that way – “an appliance” – everything will stay as it should: you put a slice of bread into the appliance, and after some time, a toast pops out. Brown, hot, and ready to be eaten.

You are satisfied.

One day, however, someone will look at the toaster and say: “the thing…”. A magic word that attracts a specific kind of businesspeople. They believe that you can turn any data into money. Big data equals big money, so they keep searching for things which can be used to produce the data. They will stuff your toaster with dozens of sensors, connect it to the Internet, and turn your simple bread browner into a smart AiToastComposer Plus 100 (first month of subscription for free).

Goodbye, warm sandwiches. Welcome, “Software update in progress. Toast preparation can take longer than expected…”. Whoever has prepared a toast in their life knows that the word longer combined with a red-hot coil cannot end well.

You become irritated.

But this is not the end. One evening, eating a cold, fluffy sandwich, you read in the news that the data of all toaster users leaked from the cloud servers. Now everyone in the world knows how many toasts you eat, what kind of bread you use, and what color socks you wear on Mondays. “How the hell does my toaster know that?”. It knows much more, but you have something else to worry about right now: Hackers gained backdoor access to the ‘make my toast’ application. There might be a hacker hiding inside your toaster! Suddenly all the smart lights go down, the fridge starts shouting something in a foreign language, and the heater decides to turn your apartment into a sauna. Pop! And out comes a toast.

Now you are terrified.

Welcome to the new reality. You are surrounded by things that collect and sell your private data. Things that can be misused to access your property, create an army of zombie bots, or simply ruin your breakfast. Things that someday may decide that they don’t want to be called things anymore.

The ultimate in paranoia is not when everyone is against you but when everything is against you. Instead of “My boss is plotting against me,” it would be “My boss’s phone is plotting against me.” Objects sometimes seem to possess a will of their own anyhow, to the normal mind; they don’t do what they’re supposed to do, they get in the way, they show an unnatural resistance to change.

Philip K. Dick