Using a local registry with a VSTS pipeline

The problem

You are using Microsoft Azure DevOps. You want to create a CI pipeline that uses Docker for building your great piece of software. Simple. Just create a container registry… You have no access rights. Ask your manager. He has no idea how to use Azure Cloud Shell. And he has no access rights either. Ask IT. They tell you to wait. Waiting takes time, and if you don’t have it, here is a quick solution.

The solution

It is not perfect, as you need to use your own VSTS agent, but setting one up is not so hard, and probably you or someone in your team has enough access rights to do it. To get a local Docker registry, just spin up a registry container:

docker run -d -p 5000:5000 --name registry registry:2
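Before wiring it into the pipeline, you can sanity-check the registry by pushing any image to it and querying the catalog endpoint of the registry HTTP API (the alpine image below is just an example):

$ docker pull alpine
$ docker tag alpine localhost:5000/test
$ docker push localhost:5000/test
$ curl http://localhost:5000/v2/_catalog
{"repositories":["test"]}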

Now you can use it in your VSTS pipeline like this:

# triggers, parameters, etc.
 
pool: '<your_agent_pool>'
variables:
  imageName: 'localhost:5000/my_image'
jobs:
  - job: prepare
    steps:
      - bash: |
          echo "Build docker image"
          export DOCKER_BUILDKIT=1
          docker build --ssh default -t $(imageName) -f docker/Dockerfile .
          docker push $(imageName)
  - job: build
    dependsOn: prepare
    container:
      image: $(imageName)
    steps:
      # steps to build your application

# rest of the pipeline

The created image will be pushed to the local registry and pulled in the build stage. Now you can enjoy your pipeline while your IT department is working hard to process your ticket (at the moment of writing this article, I have been waiting for the second month already). Good luck. Stay strong. And switch to GitLab.

Journey with Rust Part 3: small dive into attribute macros

Important note: If you are looking for a comprehensive guide to Rust macros, please keep on searching – this one is just a quick glimpse at what sits under the hood of the #[] syntax. The one who wrote it has no real experience or knowledge. All he has is his keyboard, the Google search engine, and his faith that one day he will reach the zen state of coding.

The goal

Today’s goal: create a macro that will reverse the name of any function (yes, it is possible!) and inject some extra code into its body. In short: make the following code compile.

#[reverse_name(test)]
fn rust_is_fun() {
    println!("Called by function");
}

fn main() {
    nuf_si_tsur();
}

The solution

The code presented below does exactly what we need. The whole project can be found here.

use syn;
use proc_macro::TokenStream;
use proc_macro2::Span;
use quote::quote;


#[proc_macro_attribute]
pub fn reverse_name(attr: TokenStream, item: TokenStream) -> TokenStream {

    // turn TokenStream into a syntax tree
    let func = syn::parse_macro_input!(item as syn::ItemFn);

    // extract fields out of the item
    let syn::ItemFn {
        attrs,
        vis,
        mut sig,    // mutable as we are going to change the signature
        block,
    } = func;

    let name = (format!("{}", sig.ident)).chars().rev().collect::<String>();
    sig.ident = syn::Ident::new(&name, Span::call_site());

    let attr_str = attr.to_string();

    let output = quote! {
        #(#attrs)*
        #vis #sig {
            println!("Injected: {}", #item_str);
            #block
        }
    };

    // See the body of our new function (printed during build)
    println!("New function:\n{}", output.to_string());

    // Convert the output from a `proc_macro2::TokenStream` to a `proc_macro::TokenStream`
    TokenStream::from(output)
}

Only a few copy-paste actions, some glue code here and there, and done. But what exactly have I done?

WTH have I done?

Not knowing why the code does not work is a bad thing, but not knowing why the code does work is even worse. Let’s try to figure out what exactly happened above.

We added an extra print statement, and while building the project we can see its output:

New function:
fn nuf_si_tsur()
{
    println! ("Injected: {}", "test") ; { println! ("Called by function") ; }
}

So the compiler took the source code, found the part marked with the reverse_name attribute, and fed it into our function, replacing the original code with its output. In theory, we can manipulate the code in any crazy way we want (although I guess that black-magic macros in Rust are just as bad as in C).

Q&A

Some questions arose while writing the code, so it’s time to search for the answers.

1. Why do we need a separate proc-macro crate for macros?

As we saw, our macro code is used to manipulate other code during the build. That means the functions need to be available to the compiler before it starts its work. And since these functions are written in Rust, they must be available as binaries, so we need to compile them in a separate crate. Also, note that when cross-compiling (e.g. for an ARM microcontroller) the macro code always needs to be compiled for your development machine, not the target one. Another reason to keep it separated.
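In practice, this means a dedicated library crate with the proc-macro flag set in its manifest. A minimal Cargo.toml sketch (the versions are assumptions):

[lib]
proc-macro = true

[dependencies]
syn = { version = "1.0", features = ["full"] }
quote = "1.0"
proc-macro2 = "1.0"

The "full" feature of syn is what brings in the ItemFn type we parse into.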

2. Why proc_macro and proc_macro2?

The proc_macro crate is the library that makes all the macro magic work. proc_macro2 is “a wrapper around the procedural macro API of the compiler’s proc_macro crate.” This part is confusing, but it looks like proc_macro can’t be used by e.g. the syn crate, so we need yet another crate redefining the same types (like Ident or Span). Something that might change in the future, I guess, but for now we need both.

3. What are syn and quote?

Functions inside the syn crate translate a TokenStream into a syntax tree that can represent any code construction present in the Rust language. In our example, the ItemFn structure holds all the parts that can be present in a free-standing function (parameters, name, body, etc.). quote does the opposite – it translates a syntax tree back into a token stream. It has a very interesting feature that allows writing something that looks very similar to ordinary code, which makes things more readable.

4. Can I debug a macro-generated function?

No. At least not without some extra effort. In theory, you could print (as we did in our example), copy-paste, and debug any function created by the macro engine. Another option would be to use a tool like cargo-expand that recursively expands all the macros used in the code.
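For the record, using it boils down to two commands (a sketch – note that cargo-expand relies on a nightly toolchain under the hood):

$ cargo install cargo-expand
$ cargo expand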

Summary

Rust macros are a very powerful and yet easy-to-use feature. I have been using Python to generate C++ and C code for a long time, but Rust sets new standards when it comes to code generation.

Journey with Rust Part 2: Unit testing

I won my first battles with the Rust compiler, and now I have code that is failproof. But having software that does not crash is not enough. I need to make sure that my application does what it is supposed to do. Even more important, I need a guarantee that its behavior won’t change in the future, after all the refactoring, bug fixing, and adding of new features. Also, I need a magic force to drive my architecture (so I can add yet another cool abbreviation to my CV). What I need is Unit Testing.

Time for another adventure. Let the Google search engine guide us on our journey.

Simple test

Our first goal is to test this piece of code (full source can be found here).

pub fn find_marmot(depth: u32) -> bool {
    depth > 20
}

A very nice surprise is that Rust already comes with support for unit testing, so there is no need to install any external crate. Testing is as simple as adding a few extra lines of code to the source file…

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn when_search_not_deep_then_no_marmot_found() {
        assert_eq!(find_marmot(19), false);
    }
}

…and running the test target.

$ cargo test
   Compiling marmot_test v0.1.0
    Finished test
     Running unittests

running 1 test
test marmot_hole::test::when_search_not_deep_then_no_marmot_found ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Super easy but also super primitive*. It is not something that you could compare with a proper unit test framework like Google Test or Catch2. It looks more like a small add-on that allows you to run multiple small programs and count the number of panics. No extra features like test fixtures, mocks, or parameterized tests.

Less simple test

This is our next function to test:

pub fn chance_to_find_marmot(day_of_week: u8) -> f32 {
    match day_of_week {
        1 => 1.0,
        2 | 3 | 4 => 1.0 / 2.0,
        5 | 6 | 7 => 1.0 / 3.0,
        _ => panic!("Not a day of week")
    }
}

While I can test for a panic with the should_panic attribute, there is no assertion dedicated to floating-point results that would not suffer from floating-point inaccuracy (see EXPECT_FLOAT_EQ from GTest). To add this functionality, we need to install the assert_approx_eq crate and use it like this:

#[test]
#[should_panic]
fn when_invalid_week_day_should_panic() {
    chance_to_find_marmot(9);
}

#[test]
fn when_friday_then_0_33_chance_of_finding_marmot() {
    let res = chance_to_find_marmot(5);
    assert_approx_eq!(res, 0.333, 0.01)
}
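Since these asserts live only inside tests, the crate goes into the dev-dependencies section of Cargo.toml (the version is an assumption):

[dev-dependencies]
assert_approx_eq = "1.1"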

Parameterized tests

Searching for a way to do parameterized tests, I encountered the rstest crate. So it looks like I will have to install yet another external framework. With rstest installed I can write my test like this:

#[rstest]
#[case(1, 1.0)]
#[case(2, 0.5)]
#[case(3, 0.5)]
#[case(4, 0.5)]
#[case(5, 0.33)]
#[case(6, 0.33)]
#[case(7, 0.33)]
fn check_all_days(#[case] input: u8, #[case] expected: f32) {
    assert_approx_eq!(expected, chance_to_find_marmot(input), 0.01)
}

It also supports test fixtures but does not offer anything more than that.
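For completeness, a fixture in rstest is just a function marked with the #[fixture] attribute, matched to a test argument by name. A minimal sketch based on the crate’s documentation (the fixture name is made up):

#[fixture]
fn deep_hole() -> u32 {
    42
}

#[rstest]
fn when_fixture_hole_is_deep_then_marmot_found(deep_hole: u32) {
    assert_eq!(find_marmot(deep_hole), true);
}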

Mocking framework

Now we start moving even more uphill. There are so many different mocking frameworks for Rust that it is really hard to judge which one is the best. Luckily for us, someone has walked this path before and left this useful overview. Mockall has the greatest number of features, and quick research reveals that it is the only framework that is still actively developed. So the choice is not so hard after all.

Let’s add a simple trait and see how we can mock it.

pub trait HidingPlace {
    fn has_marmot(&self) -> bool;
}

pub fn find_marmot_in(hiding_place: &dyn HidingPlace) -> bool {
    return hiding_place.has_marmot();
}

Following the documentation, I just do this:

use mockall::*;
use mockall::predicate::*;

#[automock]
pub trait HidingPlace {
    fn has_marmot(&self) -> bool;
}

And it works. I can now use the MockHidingPlace structure in my tests. However, I don’t feel comfortable polluting my code with test-specific statements, so I will try to move it into a dedicated module. More doc reading, and I found a way to do this:

#[cfg(test)]
mod test {
    use super::*;
    use mockall::*;
    use mockall::predicate::*;

    mock! {
        pub Hole {}
        impl HidingPlace for Hole {
            fn has_marmot(&self) -> bool;
        }
    }

    #[test]
    fn when_marmot_in_hiding_place_then_marmot_found() {
        let mut mock = MockHole::new();
        mock.expect_has_marmot()
            .times(1)
            .returning(|| true);
        assert_eq!(find_marmot_in(&mock), true);
    }
}

Summary

Setting up a unit test environment in Rust means adding specialized crates to your project. Each one brings a single piece of functionality, like mocking or float asserts, and together they create a full testing environment. Not as polished as the solutions we know from C or C++, but good enough to test our code and move to the next chapter of our journey.



* In this blog post, the Mozilla guys suggest using the standard test tool from Rust, so maybe I just underestimate its potential

Journey with Rust Part 1: Errors everywhere

Working with Rust is an exciting adventure. One that starts quite grim: an evil compiler tries to cut your head off every time you write a single line of code. But as you move forward and obey a few simple rules, things become better and less painful. Eventually, you master the language and reach a secret world of software perfection and beauty. And once you are there, you never want to go back…

That is what the Internet once told me, so I took my keyboard and my screen and started the journey on my own. A few miles of code in, and I get attacked by errors from every possible direction.

Compilation errors

There is nothing more frustrating than being told what to do: teachers, parents, my wife, my boss… almost everyone. And now my compiler as well. “Consider removing this semicolon”. So I do. “Consider borrowing here”. So I do. Line after line just applying fixes until it finally shuts up and lets my program run.

Try to do this:

let mut array: [i32; 2] = [1, 2];
array[2] = 3;

and you get this:

error: this operation will panic at runtime

array[2] = 3;
^^^^^^^^ index out of bounds: the length is 2 but the index is 2

Try this:

let mut ARRAY: [i32; 2] = [1, 2];
ARRAY[1] = 3;

and you get this:

warning: variable `ARRAY` should have a snake case name

let mut ARRAY: [i32; 2] = [1, 2];
        ^^^^^ help: convert the identifier to snake case: `array`

Working with the Rust compiler is an exceptional experience. You quickly start to understand that the little gnome behind the screen* does not trust you at all. It checks every step you take, making sure you won’t go off the beaten track. Very frustrating if you have ever worked with C++, but this behavior was not added just to make your life miserable. There are some great benefits to working by the rules.

Memory safety (more compilation errors)

The biggest advantage of working with Rust is memory safety. Forget about nullptr dereferencing, uninitialized variables, deadlocks, race conditions, and all those things that make software engineering such an “interesting” discipline. No more UFO bug tickets starting with “seen only once”. No more week-long investigations, no more running the same code for the 1000th time, hoping that it will finally crash. Guess what will happen if I try to do this:

let i : i32;
println!("Here is some garbage: {}", i);

Exactly. The compiler gets angry and throws red errors at me. And it won’t stop until I agree to start coding the way it wants (see point 1) and initialize variables before use. Let’s do something even crazier and try to spin up multiple threads:

use std::thread;

fn main() {

    static mut NOT_PROTECTED: i32 = 0;

    thread::spawn(|| {
        for _ in 0..10 {
            NOT_PROTECTED += 1;
        }
    }).join().unwrap();

    println!("Val: {}", NOT_PROTECED);
}

Now the real battle begins. For every single fix, there are two new errors. After one hour of pasting random stuff from Stack Overflow, the compiler finally shuts up.

use std::thread;
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Arc;

fn main() {

    let protected = Arc::new(AtomicI32::new(0));
    let result = Arc::clone(&protected);
    thread::spawn(move || {
        for _ in 0..10 {
            let val = protected.load(Ordering::Relaxed);
            protected.store(val + 1, Ordering::Relaxed);
        }
    }).join().unwrap();

    println!("Val: {}", result.load(Ordering::Relaxed));
}

Our gnome is maybe grumpy, but it just saved us from some concurrency issues and, what is more important, from possible future problems that would definitely arise as the software gets older and bigger.

Some magic

This is a good place to introduce a powerful magic spell: the unsafe keyword, which tells the compiler: “trust me, I know what I am doing”. It enables the shooting-yourself-in-the-foot mode which we know so well from C++. No limits, no errors. This will compile just fine.

use std::thread;

fn main() {

    unsafe {
        static mut NOT_PROTECTED: i32 = 0;

        thread::spawn(|| {
            for _ in 0..10 {
                NOT_PROTECTED += 1;
            }
        }).join().unwrap();

        println!("Val: {}", NOT_PROTECTED);
    }
}

Very tempting, but writing your whole program inside a big unsafe clause is not considered the best Rust practice. This keyword should be used with external, well-tested libraries – things that we know won’t break and won’t introduce undefined behavior. In short – code that is not written by us.

Error handling (back to compilation errors)

Do you want to be famous? Of course you do. And there is no better way to become famous than to deadlock a space orbiter or blow up a nuclear power plant.

We already know that Rust won’t allow us to inject any nasty race condition, but how about reading from a non-existent file?

use std::fs::File;
use std::io::prelude::*;

fn main() {
    let mut new_command = String::new();
    let mut file = File::open("important_space_orbiter_instructions.txt");
    file.read_to_string(&mut new_command);
    println!("Space station please do: {}", new_command);
}

This won’t fly. And I mean the code, not the space station. The return type of File::open is a Result. A Result can be something (a file handle in our case) or it can be an error (if the operation failed). And the best part is: you can’t ignore the error condition (guess who will complain if you do). The simplest way to make our code compile is to panic in case of failure. The expect call below will make our program terminate if the open does not succeed.

use std::fs::File;
use std::io::prelude::*;

fn main() {
    let mut new_command = String::new();
    let mut file = File::open("important_space_orbiter_instructions.txt").expect("Can't read file");
    if file.read_to_string(&mut new_command).is_ok() {
        println!("Space station please do: {}", new_command)
    }
}

But the real power of the Result type comes with the “?” operator, which simply means: in case of an error, return the error.

use std::fs::File;
use std::io::prelude::*;

fn read_command() -> std::io::Result<String> {
    let mut new_command = String::new();
    let mut file = File::open("important_space_orbiter_instructions.txt")?;
    file.read_to_string(&mut new_command)?;
    Ok(new_command)
}

fn main() {
    let command = read_command().expect("Can't get command");
    println!("Space station please do: {}", command);
}

This way the error handling logic does not pollute the normal execution path. Everything is nice, clean and safe. Thank you little gnome.

Summary

When working with Rust, you can get the feeling that the compiler is working against you and that you need to fight it every time you want to get something done. But think about it this way: when going to war, who would you prefer to have on your side? A grumpy guy who will point out every little mistake you make, or a silent fellow who says nothing even when you hold your rifle by the wrong end?

Last word

One important note to know when you start programming in Rust: assignment is a move operation (at least for types that do not implement Copy).
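A minimal illustration of what that means in practice:

fn main() {
    let original = String::from("marmot");
    let moved = original;          // ownership moves here, no copy is made
    // println!("{}", original);   // error[E0382]: borrow of moved value
    println!("{}", moved);         // fine - `moved` now owns the string
}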

*Compiler Gnomes – small creatures living On The Hardware Side. With their tiny binary axes they chop human-readable text into pieces that are used to prepare a binary soup**

**Binary soup – soup that tells your computer what to do. Tastes like oranges

How NOT to use mocks in the googletest framework and why singletons are bad

Let’s make a game*. It is about a marmot – a great beast that can do one thing only: it can sleep. The marmot’s state depends on the state of his universe: in the winter he sleeps. And during the night he sleeps. And if he does not sleep, he is awake. Our Universe interface will look like this:

class Universe
{
public:
    virtual ~Universe() = default;

    virtual bool IsDay() const = 0;
    virtual bool IsWinter() const = 0;
};

Now comes a brilliant idea: let’s make our universe a singleton – in the end, there should be only one, and it might have some internal state that should be the same for all the clients. The following factory will guarantee the singleton-ness and will allow us to add more complex universes in the future:

Universe& UniverseFactory::getInstance()
{
	static RandomUniverse singleton;
	return singleton;
}

The marmot can now use this factory to get a Universe and figure out if he sleeps or not.

MarmotState Marmot::getState() const
{
    if(UniverseFactory::getInstance().IsWinter()) {
        return MarmotState::MARMOT_IS_SLEEPING;
    }

    if(UniverseFactory::getInstance().IsDay()) {
        return MarmotState::MARMOT_IS_AWAKE;
    }

    // Oh oh - we made a mistake here!
    return MarmotState::MARMOT_IS_AWAKE;
}

Of course, everything is unit tested and all the tests are green, so let’s “ship it!“. The next day the support phone line gets red hot as hundreds of people are calling, asking why their marmots are not sleeping in the middle of the night. What happened here? Let’s look at the tests.

static UniverseMock universe;

Universe& UniverseFactory::getInstance()
{
    return universe;
}

class MarmotTest: public ::testing::Test {
public:

    void SetUp() {}

    void TearDown() {}
};

TEST_F(MarmotTest, When_Not_Winter_And_Day_Should_Be_Awake) {
    Marmot marmot;

    EXPECT_CALL(universe, IsWinter()).WillOnce(Return(false));
    EXPECT_CALL(universe, IsDay()).WillOnce(Return(true));
    ASSERT_EQ(MarmotState::MARMOT_IS_AWAKE, marmot.getState());
}

TEST_F(MarmotTest, When_Winter_Should_Sleep) {
    Marmot marmot;

    EXPECT_CALL(universe, IsWinter()).WillRepeatedly(Return(true)); // oh

    EXPECT_CALL(universe, IsDay()).WillOnce(Return(true));
    ASSERT_EQ(MarmotState::MARMOT_IS_SLEEPING, marmot.getState());

    EXPECT_CALL(universe, IsDay()).WillOnce(Return(false));
    ASSERT_EQ(MarmotState::MARMOT_IS_SLEEPING, marmot.getState());
}

TEST_F(MarmotTest, When_Night_Should_Sleep) {
    Marmot marmot;

    EXPECT_CALL(universe, IsDay()).WillOnce(Return(false));
    ASSERT_EQ(MarmotState::MARMOT_IS_SLEEPING, marmot.getState());
}

The last test should fail, but it does not, because we’ve made two terrible mistakes: the universe mock is a static variable, and there is no expectation on IsWinter in the last test. The first issue is the real problem here. The mock object holds its state until it is destroyed, which means that the expectations we set in the previous tests propagate to the next ones (if not overwritten). So the IsWinter expectation from When_Winter_Should_Sleep is also used in the When_Night_Should_Sleep test. Fine. Let’s just create and destroy the mock for every test run.

static UniverseMock* universe;

Universe& UniverseFactory::getInstance()
{
    return *universe;
}

class MarmotTest: public ::testing::Test {
public:

    void SetUp() {
        universe = new UniverseMock;
    }

    void TearDown() {
        delete universe;
    }
};

And this will work until one day somebody does this inside our marmot:

auto& universe = UniverseFactory::getInstance();

MarmotState Marmot::getState() const
{
if(universe.IsWinter()) {
return MarmotState::MARMOT_IS_SLEEPING;
}

if(universe.IsDay()) {
return MarmotState::MARMOT_IS_AWAKE;
}

// Oh oh - we made a mistake here!
return MarmotState::MARMOT_IS_AWAKE;
}

Now we get a segfault instead of test results. Which brings us to the main conclusion of this post: singletons (at least as implemented here) are bad, and we should do some serious thinking before we decide to use them. And if we do decide to use a singleton, we should do some more thinking to make sure that our code will be easily testable. Does it mean that we need to redesign our marmot game? Not necessarily. We can keep this bad code untouched with the help of a magic function that verifies and clears expectations. It’s called (surprise, surprise) VerifyAndClearExpectations, and we can just call it after every test run like this:

class MarmotTest: public ::testing::Test {
public:

    void SetUp() {}

    void TearDown() {
        Mock::VerifyAndClearExpectations(&universe);
    }
};

Now the tests go red, and we can clearly see the root causes – one because of wrong expectations and the second because of the bug. Fix it, ship it, and go for a… not yet! Add a “refactor marmot universe” ticket to the technical backlog, and now go for a beer (or to sleep if you are a marmot and it’s winter or night).
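And when that ticket finally gets picked up, the refactoring will most likely boil down to plain dependency injection – something along these lines (a sketch, not the shipped code; it also fixes the night bug):

class Marmot
{
public:
    explicit Marmot(const Universe& universe) : universe_(universe) {}

    MarmotState getState() const
    {
        if (universe_.IsWinter() || !universe_.IsDay()) {
            return MarmotState::MARMOT_IS_SLEEPING;
        }
        return MarmotState::MARMOT_IS_AWAKE;
    }

private:
    const Universe& universe_;
};

In the tests, the mock then becomes a plain local variable with an obvious lifetime – no static state, no factory, no segfaults:

UniverseMock universe;
Marmot marmot(universe);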

*All source code can be found here

YUV422 to RGB conversion using Python and OpenCV

In the previous post, we were grabbing a RAW image from our camera using the v4l2-ctl tool. We were using a primitive Python script that allowed us to look at the Y channel. I decided to create yet another primitive script that enables the conversion of our YUV422 image into an RGB one.

import os
import sys
import cv2
import numpy as np

input_name = sys.argv[1]
output_name = sys.argv[2]
img_width = int(sys.argv[3])
img_height = int(sys.argv[4])


with open(input_name, "rb") as src_file:
    # read one YUYV frame: 2 bytes per pixel
    raw_data = np.fromfile(src_file, dtype=np.uint8, count=img_width*img_height*2)
    im = raw_data.reshape(img_height, img_width, 2)

    # convert the packed YUYV data into a BGR image
    rgb = cv2.cvtColor(im, cv2.COLOR_YUV2BGR_YUYV)

    if output_name != 'cv':
        cv2.imwrite(output_name, rgb)
    else:
        # special output name 'cv' - show the image instead of saving it
        cv2.imshow('', rgb)
        cv2.waitKey(0)

The OpenCV library does all the image processing, so the only thing we need to do is make sure we are passing the input data in the correct format. In the case of YUV422, we need an image with two channels per pixel: Y and an alternating U/V sample (the YUYV byte order is Y0 U0 Y1 V0, so two pixels share one pair of chroma values).
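If you want to convince yourself that the sizes add up, a YUYV frame is exactly width x height x 2 bytes, which is easy to verify on the grabbed file (a quick, optional check):

import os
assert os.path.getsize("data.raw") == 3840 * 2160 * 2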

Usage examples:

python3 yuv_2_rgb.py data.raw cv 3840 2160
python3 yuv_2_rgb.py data.raw out.png 3840 2160

Test with a webcam

You can check if your laptop camera supports YUYV by running the following command:

$ v4l2-ctl --list-formats-ext

If yes, run the following commands to get the RGB image. Please note that you might need to use a different resolution.

$ v4l2-ctl --set-fmt-video=width=640,height=480,pixelformat=YUYV --stream-mmap --stream-count=1 --device /dev/video0 --stream-to=data.raw
$ python3 yuv_2_rgb.py data.raw out.png 640 480

Test on the target

Time to convert the image coming from the i.MX8MP development board.

$ ssh root@imxdev
$$ v4l2-ctl --set-fmt-video=width=3840,height=2160,pixelformat=YUYV --stream-mmap --stream-count=1 --device /dev/video0 --stream-to=data.raw
$$ exit
$ scp root@imxdev:data.raw .
$ python3 yuv_2_rgb.py data.raw out.png 3840 2160
$ xdg-open out.png

Taking pictures with imx8mp-evk and Basler dart camera

Intro

A new toy arrived: an i.MX 8M Plus Evaluation board from NXP with a Basler 5MP camera module. This is going to be fun. Box opened. Cables connected. Power on. LEDs blinking. Time to take the first picture.

Step 1. Check device tree

One thing we need for sure is a camera. The hardware is connected, but does Linux know about it?

Nowadays, the Linux kernel gets information about attached hardware from a device tree*. This makes the configuration more flexible and does not require recompiling the kernel for every hardware change. The NXP board BSP package already contains a device tree file for the Basler camera, so my only job is to check whether it is used and enable it if not.

First, we will check if the default device tree contains info about my camera. My system is up and running, so I can inspect the device tree by looking around in the /sys directory. But first we need to know what we are looking for. If you open the Basler device tree source file**, you can see that the camera is attached to the I2C bus:

...
#include "imx8mp-evk.dts"

&i2c2 {
	basler_camera_vvcam@36 {
...

If you now go to imx8mp.dtsi***, you will discover that I2C2 is mapped to the address 30a30000.

...
	soc@0 {
		compatible = "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
		ranges = <0x0 0x0 0x0 0x3e000000>;

		caam_sm: caam-sm@100000 {
			compatible = "fsl,imx6q-caam-sm";
			reg = <0x100000 0x8000>;
		};

		aips1: bus@30000000 {
			compatible = "simple-bus";
			reg = <0x30000000 0x400000>;
			...
		aips3: bus@30800000 {
			compatible = "simple-bus";
			reg = <0x30800000 0x400000>;
			#address-cells = <1>;
			#size-cells = <1>;
			ranges;

			ecspi1: spi@30820000 {
				...
			i2c2: i2c@30a30000 {
				#address-cells = <1>;
				#size-cells = <0>;
				compatible = "fsl,imx8mp-i2c", "fsl,imx21-i2c";
				reg = <0x30a30000 0x10000>;
				interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>;
				clocks = <&clk IMX8MP_CLK_I2C2_ROOT>;
				...

Let’s check if this node is present on the target.

$ cd /sys/firmware/devicetree/base/soc@0/bus@30800000/i2c@30a30000
$ ls
#address-cells  adv7535@3d       clocks      interrupts              name            pinctrl-0      reg     tcpc@50
#size-cells     clock-frequency  compatible  lvds-to-hdmi-bridge@4c  ov5640_mipi@3c  pinctrl-names  status

Our camera is not listed there, so it’s the device tree we need to fix first.

Step 2. Set device tree

To change the device tree we need to jump into U-Boot. Just restart the board and press any key when you see:

Hit any key to stop autoboot

Time to check and change the boot-loader settings. First, let’s see what we have:

$ u-boot=> printenv
baudrate=115200                                                                 
board_name=EVK  
...
fastboot_dev=mmc2                                                               
fdt_addr=0x43000000                                                             
fdt_file=imx8mp-evk.dtb                                                         
fdt_high=0xffffffffffffffff                                                     
fdtcontroladdr=51bf7438                                                         
image=Image  
...
serial#=0b1f300028e99b32                                                        
soc_type=imx8mp                                                                 
splashimage=0x50000000                                                          
                                                                                
Environment size: 2359/4092 bytes  

As expected, U-Boot uses the default device tree for our evaluation board. Let’s try to find the one with the Basler camera config. I know it should sit in the eMMC, so I will start there.

$ u-boot=> mmc list
FSL_SDHC: 1
FSL_SDHC: 2 (eMMC)

$ u-boot=> mmc part

Partition Map for MMC device 2  --   Partition Type: DOS

Part    Start Sector    Num Sectors     UUID            Type
  1     16384           170392          a5b9776e-01     0c Boot
  2     196608          13812196        a5b9776e-02     83

$ u-boot=> fatls mmc 2:1
 29280768   Image
    56019   imx8mp-ab2.dtb
    61519   imx8mp-ddr4-evk.dtb
    61416   imx8mp-evk-basler-ov5640.dtb
    61432   imx8mp-evk-basler.dtb
    62356   imx8mp-evk-dsp-lpa.dtb
    62286   imx8mp-evk-dsp.dtb
    61466   imx8mp-evk-dual-ov2775.dtb
    61492   imx8mp-evk-ecspi-slave.dtb

We got it! Now it’s time to set it as the default one. And boot the board again.

$ u-boot=> setenv fdt_file imx8mp-evk-basler.dtb
$ u-boot=> saveenv                              
Saving Environment to MMC... Writing to MMC(2)... OK
$ u-boot=> boot

You can now check, in the directory we inspected earlier, that the camera hardware is present in the device tree.

Step 3. Get the image

Finally, we can get our image (a blob of pixels, to be precise). The easy way would be to connect a screen and run one of the NXP demo apps (though you need to flash your board with the full image to get them). But easy solutions are for people who have their dev boards within reach. Mine is running upstairs, and I prefer to do some extra typing rather than walk there. First, let’s check the data formats supported by our camera.

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'YUYV' (YUYV 4:2:2)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [1]: 'NV12' (Y/CbCr 4:2:0)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [2]: 'NV16' (Y/CbCr 4:2:2)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [3]: 'BA12' (12-bit Bayer GRGR/BGBG)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)

We can grab raw data using the following command:

$ v4l2-ctl --set-fmt-video=width=3840,height=2160,pixelformat=YUYV --stream-mmap --stream-count=1 --device /dev/video0 --stream-to=data.raw
$ ls
data.raw

Copy the raw data file to your development machine and execute this simple Python script that will extract the Y component out of the YUV422 image:

# yuv_2_rgb (does not really convert but good enough to check if cam is working)
import sys
from PIL import Image

in_file_name = sys.argv[1]
out_file_name = sys.argv[2]

with open(in_file_name, "rb") as src_file:
    raw_data = src_file.read()
    # every second byte of a YUYV stream is a luma (Y) sample
    img = Image.frombuffer("L", (3840, 2160), raw_data[0::2])
    img.save(out_file_name)


# RUN THIS ON YOUR DEV PC/MAC:
$ scp root@YOUR_BOARD_IP:data.raw .
$ python3 yuv_2_rgb.py data.raw data.bmp
$ xdg-open data.bmp

You should see a black and white image of whatever your camera was pointing at.

Step 4. Movie time

Now it is time to get some moving frames. I will use GStreamer to send the image from the camera to my laptop with two simple commands:

# RUN THIS ON YOUR IMX EVALUATION BOARD (replace @YOUR_IP@ with your ip address): 
$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! videoconvert ! videoscale ! videorate ! video/x-raw,framerate=30/1,width=320,height=240 ! vpuenc_h264 ! rtph264pay ! udpsink host=@YOUR_IP@ port=5000

# RUN THIS ON YOUR DEV PC/MAC:
$ gst-launch-1.0 udpsrc port=5000 !  application/x-rtp ! rtph264depay ! avdec_h264 ! autovideosink

That’s it. We can see the world through the i.MX eyes/sensors. You can play with the stream settings (image size, frame rate, etc.) or pump the data into some advanced image-processing software. Whatever you do, have fun!




* if you are interested in device trees, there are some great materials from Bootlin

** at the moment of writing, the DTS for the Basler camera can be found e.g. here, but since NXP is busy with de-Freescalization I expect it to be moved to some imx folder in the future

*** device trees are constructed in a hierarchical way, and for our board imx8mp.dtsi is the topmost one


Code monkey detected (in 80%)

The story begins with The Things Network Conference. Signed in. Got an Arduino Portenta board (including a camera shield) and attended an inspiring Edge Impulse workshop about building an embedded, machine-learning-based elephant detector.

Finding elephants in my room is not a very useful thing. First: I never miss one if it shows up. Second: no elephant has ever shown up in my room. But there is a monkey. It eats one banana in the morning to get energy and one banana in the evening before it goes to bed*. Between eating its bananas, it sits, codes, drinks coffee, codes, does some exercises, and codes even more. Let’s detect this monkey and make a fortune by selling its presence information to big-data companies.

Checklist

Four years ago, I half-finished a “Machine Learning” online training (knowledge: checked). Once I ran an out-of-the-box cat detection model with the Caffe framework (experience: checked). I read the book** (book: checked). I have no clue what I will be doing, but hey, that is how code monkeys work.

Step one: get Arduino

I use an Arduino board, so I need an Arduino IDE. I follow the steps described here and have a working IDE. Now I probably need to install some extra stuff.

Step two: extra stuff

Following online tutorials, I add the Portenta board (Tools->Board->Boards Manager, type “mbed”, install the Arduino “mbed-enabled Boards” package). I struggle for some time with the camera example. The output image does not seem to be right. I am stuck for some time trying to figure out why the image height should be 244 instead of 240. The camera code looks like a very beta thing. Maybe I am just using the wrong package version. I switch from 1.3.1 to 1.3.2. The example works.

Next, I add the TensorFlow library: Tools->Manage Libraries, type “tensor”. From the list, I install Arduino_TensorFlowLite.

Step three: run “Hello World”

The book starts with an elementary example, which uses machine learning to control the PWM cycle of an LED. This example targets the Arduino Nano 33 BLE device, but apparently it also works on the Portenta without any code modifications. I select it, upload it, and stay there watching as the LED changes its brightness.

Step four: detect the monkey

After some time of watching the red LED, I am ready to do some actual coding. First, I switch to the person detection example (File->Examples->Arduino_TensorFlowLite->person_detection). Then I modify the arduino_image_provider.cpp file, which contains the GetImage function used to get the image out of our camera. I throw away all its content and replace it with a modified version of the Portenta CameraCaptureRawBytes example:

#include <mbed.h>
#include "image_provider.h"
#include "camera.h"

const uint32_t cImageWidth = 320;
const uint32_t cImageHeight = 240;
uint8_t sync[] = {0xAA, 0xBB, 0xCC, 0xDD};

CameraClass cam;
uint8_t buffer[cImageWidth * cImageHeight];

// Get the camera module ready
void InitCamera(tflite::ErrorReporter* error_reporter) {
  TF_LITE_REPORT_ERROR(error_reporter, "Attempting to start Arducam");
  cam.begin();
}

// Get an image from the camera module
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
                      int image_height, int channels, int8_t* image_data) {
  static bool g_is_camera_initialized = false;
  if (!g_is_camera_initialized) {
    InitCamera(error_reporter);
    g_is_camera_initialized = true;
  }

  cam.grab(buffer);
  Serial.write(sync, sizeof(sync));
  
  auto xOffset = (cImageWidth - image_width) / 2;
  auto yOffset = (cImageHeight - image_height) / 2;
  
  for(int i = 0; i < image_height; i++) {
    for(int j = 0; j < image_width; j++) {
      image_data[(i * image_width) + j] = buffer[((i + yOffset) * cImageWidth) + (xOffset + j)];
    }
  }
    
  Serial.write(reinterpret_cast<uint8_t*>(image_data), image_width * image_height);

  return kTfLiteOk;
}

And a small change to the main program, so it shouts when the monkey is there.

...
  // Process the inference results.
  int8_t person_score = output->data.uint8[kPersonIndex];
  int8_t no_person_score = output->data.uint8[kNotAPersonIndex];

  if(person_score > 50 && no_person_score < 50) {
    TF_LITE_REPORT_ERROR(error_reporter, "MONKEY DETECTED!\n");
    TF_LITE_REPORT_ERROR(error_reporter, "Score %d %d.", person_score, no_person_score);
  }

I pass the image data to the model, and I also send it via the serial port. I will use a simple OpenCV program to display my camera’s view (which can be useful when testing):

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/video.hpp>

#include <iostream>
#include <thread>

#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>

using namespace cv;

const uint8_t cHeight = 96;
const uint8_t cWidth = 96;
uint8_t cSyncWord[] = {0xAA, 0xBB, 0xCC, 0xDD};

int main(int, char**)
{
    Mat frame;

    boost::asio::io_service io;
    boost::system::error_code error;

    auto port = boost::asio::serial_port(io);
    port.open("/dev/ttyACM0");
    port.set_option(boost::asio::serial_port_base::baud_rate(115200));

    while(true) {

        uint8_t buffer[cWidth * cHeight];

        uint8_t syncByte = 0;
        uint8_t currentByte;

        while (true) {

            boost::asio::read(port, boost::asio::buffer(&currentByte, 1));
            if (currentByte == cSyncWord[syncByte]) {
                syncByte++;
            } else {
                std::cerr << (char) currentByte;
                syncByte = 0;
            }
            if (syncByte == 4) {
                std::cerr << std::endl;
                break;
            }
        }

        boost::asio::read(port, boost::asio::buffer(buffer, cHeight * cWidth));

        frame = cv::Mat(cHeight, cWidth, CV_8U, buffer);

        if (frame.empty()) {
            std::cerr << "ERROR! blank frame grabbed" << std::endl;
            continue;
        }
        imshow("View", frame);

        if (waitKey(5) >= 0)
            break;
    }
    return 0;
}

I upload the sketch. Run the app. Point the camera at me. And… we got him! Monkey detected. I think I deserve a banana.

* bananas seem to possess some kind of fruit magic that makes them work according to the user’s (eater’s) needs. Please google “boost your energy with banana” and “get better sleep with banana” if you want more details

** Tinyml: Machine Learning with Tensorflow Lite on Arduino and Ultra-Low-Power Microcontrollers by Pete Warden and Daniel Situnayake

Using CLion with Docker

The simple way is to follow this tutorial created by someone from the CLion team. There is a big chance you will get all the help you need there. You can also try to read this post, where someone who has no clue what he is doing will try to set things up in his build environment. You can also try to follow this someone’s approach and write a short article on how to do things, because writing is learning. Or was it something with sharing… never mind. Let’s go.

Assumption no. 1 – you have a CLion installed.

Assumption no. 2 – you have a Docker engine running on your machine.

This is our source file (an example from the ImageMagick library):

#include <Magick++.h> 
#include <iostream> 

using namespace std; 
using namespace Magick; 

int main(int argc,char **argv) 
{ 
  InitializeMagick(*argv);

  Image image;
  try { 
    image.read( "logo:" );
    image.crop( Geometry(100,100, 100, 100) );
    image.write( "logo.png" ); 
  } 
  catch( Exception &error_ ) 
    { 
      cout << "Caught exception: " << error_.what() << endl; 
      return 1; 
    } 
  return 0; 
}

This is our CMake config file:

cmake_minimum_required(VERSION 3.10)
project(image_cropper)

find_package(ImageMagick REQUIRED COMPONENTS Magick++)

add_executable(app main.cpp)
include_directories(${ImageMagick_INCLUDE_DIRS})
target_link_libraries(app ${ImageMagick_LIBRARIES})

And this is the error we get when we try to run CMake:

CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
  Could NOT find ImageMagick (missing: ImageMagick_Magick++_LIBRARY)

Not good. But instead of polluting our machine with the libmagick++-dev package, we will pollute it with a container (package inside). Containers are much easier to clean up, maintain, and ship to CI when needed. Our Docker config file looks like this:

FROM ubuntu:18.04

RUN apt-get update \
  && apt-get install -y \
    build-essential \
    gcc \
    g++ \
    cmake \
    libmagick++-dev \
  && apt-get clean

And we can execute the following commands to build our image and the application.

$ docker build -t magic_builder .
$ docker run -v$PWD:/work -it magic_builder /bin/bash
$$ cd work
$$ mkdir build_docker && cd build_docker
$$ cmake .. && make -j 8

So far, so good, but our IDE is sitting in a corner looking sad as we type some shell commands. That is not how it is supposed to be. Come here, CLion, it is time for you to help us (ears up, tongue out, and it jumps happily into the foreground).

Step 1. Add ssh, rsync, and gdb to the image. Also, add an extra user that can be used for opening the ssh connection.

FROM ubuntu:18.04

RUN apt-get update \
  && apt-get install -y \
    gdb \
    ssh \
    rsync \
    build-essential \
...

RUN useradd -m user && yes password | passwd user

Step 2. Rebuild and start the container. Check if the ssh service is running and start it if not.

$ docker build -t magic_builder .
$ docker run --cap-add sys_ptrace -p127.0.0.1:2222:22 -it magic_builder /bin/bash
$$ service ssh status
 * sshd is not running
$$ service ssh start

Step 3. Now go to your IDE settings (Ctrl+Alt+S), section Build, Execution, Deployment, and add a Remote Host toolchain. Add new credentials, filling in the user name and password from the Dockerfile and the port from the run command (2222).

If everything goes well, you should have 3 green checkmarks. Now switch to the new toolchain in your CMake profile (you can also add a separate profile if you want). That’s it. You can build, run, and debug using your great CLion IDE. Some improvements are still to be done to our Docker image (auto-start of ssh, running in the background), but all of this is already somewhere on the Internet (same as this instruction, but who cares).
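For the record, the auto-start and the background run can be squeezed into a single command by making sshd the container’s main process (a sketch – on Ubuntu, sshd also wants its privilege separation directory, hence the mkdir):

$ docker run -d --cap-add sys_ptrace -p127.0.0.1:2222:22 magic_builder \
    /bin/bash -c "mkdir -p /run/sshd && /usr/sbin/sshd -D"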

Color recognition with Raspberry Pi

We know how to build. We know how to run. We even know how to use different architectures. Time to start doing something useful.

Our goal: deploy a simple, containerized application that will display the color seen by the Raspberry Pi’s camera*. The described application and its Docker config file can be found here.

Because we target the ARMv6 architecture, I decided to base the Docker image on Alpine Linux. It supports many different architectures and is very small (5 MB for the minimal version). Let’s add the OpenCV lib and the target application on top of it. Below is the Docker config file.

FROM arm32v6/alpine

ARG OPENCV_VERSION=3.1.0

RUN apk add --no-cache \
    linux-headers \
    gcc \
    g++ \
    git \
    make \
    cmake \
    raspberrypi

RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
    unzip ${OPENCV_VERSION}.zip && \
    rm -rf ${OPENCV_VERSION}.zip && \
    mkdir -p opencv-${OPENCV_VERSION}/build && \
    cd opencv-${OPENCV_VERSION}/build && \
    cmake \
    -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D WITH_FFMPEG=NO \
    -D WITH_IPP=NO \
    -D WITH_OPENEXR=NO \
    -D WITH_TBB=YES \
    -D BUILD_EXAMPLES=NO \
    -D BUILD_ANDROID_EXAMPLES=NO \
    -D INSTALL_PYTHON_EXAMPLES=NO \
    -D BUILD_DOCS=NO \
    -D BUILD_opencv_python2=NO \
    -D BUILD_opencv_python3=NO \
    .. && \
    make -j$(nproc) && \
    make install && \
    rm -rf /opencv-${OPENCV_VERSION} 

COPY src/** /app/
RUN mkdir -p /app/build && cd /app/build && cmake .. && \ 
    make && \
    make install && \
    rm /app -rf 

ENTRYPOINT ["/usr/local/bin/opencv_hist"]
CMD ["0"] 

I set my binary as the entry point, so it will run when the container is started. I also use CMD to set a default parameter, which is the camera index. If not given, 0 will be used. Now we can build and push this image to Docker Hub (or another Docker registry).

$ docker build --rm -t 4point2software/rpi0 .
$ docker push 4point2software/rpi0

Assuming you have a camera attached and configured on your Pi Zero, you can execute the following lines on the target machine.

$ docker pull 4point2software/rpi0
$ docker run --device /dev/video0 4point2software/rpi0

This should result in an output that changes every time you present a different color to the camera.
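Thanks to the CMD default, the camera index can be overridden at run time without rebuilding the image, e.g. to use a second camera:

$ docker run --device /dev/video1 4point2software/rpi0 1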

A very nice thing about building applications inside containers is that they are completely independent of the target file system. The only requirement is the Docker engine – all the rest we ship with our app. Different architecture needed? Just change the base image (e.g. arm32v7/alpine). No need for setting up a new toolchain, cross-compiling, and all the related stuff. Think about sharing build environments and CI builds, and you will definitely consider this something worth trying.
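One caveat worth mentioning: if you build such an image on an x86 machine instead of on the Pi itself, the ARM binaries produced during the build cannot execute natively, so you will need binfmt/QEMU emulation or a cross-build setup (on newer Docker versions something like docker buildx build --platform linux/arm/v6 – an assumption, as this project was not tested that way).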

* I will be using a Raspberry Pi Zero with a camera module to get the peak value of the HS histogram