Earth is flat. A short story of a lost thought.

It all started with a LinkedIn post. Nothing new — this week’s mandatory opinion, recycled with different words. Typical social media noise. Someone disagreed. Strongly enough to reach for heavy artillery and call the author a “flat-earther.” Boom. And with the recoil, I got hit too.

The Earth is flat!

That rang a bell. I remembered an old, insightful, and funny conversation with AI about… something. The problem was, all I could recall was the conclusion: the Earth is flat.

Nothing to worry about. I had my notes. A small document where I saved AI output worth keeping. I found this:

“Turns out the Earth is flat after all.”

Helpful. Thank you, past me, for trusting future me’s memory so much. Present me now had to reconstruct an entire line of thought from a single sentence. Good luck with that. Spacetime? Pancakes? Nothing clicked.

Then it hit me: if AI was involved, the process would still be there. AI would remember. The search took longer than expected, but eventually, I found it.

It wasn’t about the Earth at all. It was about information gradients—and how social media flattens them. Original ideas create spikes that, over time, get spread, diluted, and leveled across platforms. Until everyone is repeating the same thing, convinced they’ve discovered something new—while collectively ensuring everything becomes flat.

Thanks to AI, I was able to rediscover a thought that would otherwise have been lost. A thought that taught me nothing new—yet somehow felt exactly right.

The Secret Art of Keeping the Archwhale Alive

The Beast

There is a whale no one sees, circling slowly beneath the surface of every software project.

A mighty beast that carries systems on its back.

Be aware of its strength. When it is weakened or forgotten, it can pull the entire project down into the black depths of the entropy sea. And it does this so slowly that by the time someone realizes what is happening, it is already too late. Planning turns to chaos, change becomes impossible, and there are no more doughnuts from the manager. People leave as the music fades into its final violins*. And the light goes out.

Flip the soundtrack

Things don’t need to end this way—if we simply give our archwhale what it craves most: attention.

And when I say “we,” I mean everyone involved in the project. Each of us adds a small piece to the story. Adding something means taking responsibility for it.

Now the most important part: to care about a whale is not to just think about it (even if your thoughts are warm, sophisticated, or reach far into the future).
To care about a whale is to take a knife and cut it into pieces**.

Chop chop chop?

Yes—but not so fast.

First, let’s clarify what this actually means.

As explained in this article, there are countless axes along which architecture can be sliced, depending on intent. Search long enough and you’ll find hundreds of possible artifacts: designs, diagrams, documents—plus frameworks and blog posts comparing architecture to whales, bridges, or chocolate cakes.

So our first problem isn’t a lack of options, but an excess of them.

We can’t just start creating projections at random. Too much documentation is as harmful as too little. Before we start running around with diagram-knives, we need to stop and ask a simple question:

What are we actually trying to achieve?

The spatial dimension

You carry the project vision inside your head. You navigate it effortlessly. You know where things are solid—and where shortcuts were taken just to keep things moving. You already plan new features, consider possible risks, and think about how to mitigate them.

What lives in your head is similar to what an author carries when writing a book: an entire universe where the real story unfolds. Just like you, the author can explore multiple possible futures unfolding inside it.

Now imagine not one author, but a hundred, all writing the same book. Without synchronization, one kills the main character while another sends him to Scotland to find a brother who was never missing.

The universe must be shared.

That’s why we externalize it. Architecture artifacts—API contracts, dependency graphs, interface boundaries—are projections of the system that enable shared reasoning, coordination, and onboarding, keeping the universe stable while many minds shape it at once.

The time dimension

You carry the project vision inside your head.

Today.

Tomorrow your attention shifts. A month from now, you won’t remember why things are the way they are.

“It’s all in the code,” one might say. But that’s not true. Many decisions don’t affect how code is written, but how it is not written.

Why was language X chosen instead of Y?
Was market availability considered? Ecosystem maturity? Team experience?
And when a framework was selected, which trade-offs were accepted—and are they still valid?

What we want to record is not just why we chose A, but the full reasoning behind that choice.

In this sense, architecture artifacts are memory. We use them to keep the universe stable while time passes.

Not just records — thinking surfaces

Artifacts have one more important function: they act as thinking surfaces—places where ideas are tested before they harden into decisions.

You already know how this works. You don’t create class diagrams when the classes already exist in code—you do it before, to see how dependencies might look. This lets you reason at a higher level of abstraction than the implementation.

The same applies to ADRs. Instead of writing an ADR after a choice is made, start earlier. Capture doubts, alternatives, and trade-offs. After execution, clean it up and keep it.

This suggests that artifacts should be created only when we actively work on a subject. In general, yes—but they should also be reviewed from time to time (for example, at each major release). Check whether they still carry information worth caring about. Outdated artifacts can be archived so they don’t introduce unnecessary noise.

Time for sushi

Now we are ready. We know what we want—and, more importantly, why. As with everything in the universe, balance matters. The number of produced artifacts must be just enough to keep the project synchronized across space and time. This way, it stays on the edge of exploration while remaining stable.

And remember: architecture survives only as long as people actively care for it.
Not admire it.
Not remember it fondly.

Care for it through small, deliberate acts: revisiting decisions, updating maps, removing what no longer matters, making the invisible visible again.

Ignore it, and it will not protest.
It will simply sink.

* Max Richter — “On the Nature of Daylight” fits perfectly
** Space archwhales love to be sliced — it keeps them alive.

Software Architecture and a Cosmic Whale

Has Anyone Seen My Architecture?

There are countless definitions of software architecture.
Some emphasize decisions, others structures, others “the important stuff,” or whatever is hardest to change. Read enough of them and architecture begins to feel like something that slips through every classification—a creature everyone describes differently, yet no one seems to have seen.

And yet, this creature clearly exists. No one doubts that.
We recognize it by its effects: slow delivery, bugs that refuse to die, changes that feel far riskier than they should, systems that push back against even the smallest improvement.

The Mysterious Creature

One might try to exercise the imagination—to picture something that lives partly in code and partly in our heads. A multidimensional entity, not bound to a single moment in time, but stretched across the full span of its existence. Shaped by past decisions and external forces, while simultaneously guiding—and constraining—what changes are possible next. With enough effort, one might even convince oneself of having seen it.

But that is not the point.

We are software developers. Our job is not to chase mystical creatures, but to solve problems. We have deadlines. Features. Things that must work. We have bugs that reliably appear at 3 a.m.

What actually matters are the long-term consequences of change:

  • Whether, given what we have today, we can meet business requirements tomorrow.
  • Where to look when things begin to break apart.
  • Whether deleting a piece of code is safe—or the first step toward disaster.

Chop It!

To reason about architecture, we do what physicists do with spacetime—a similarly ungraspable monstrosity. If you are still holding on to some animal-like mental picture of architecture, now is the time to let it go. Things are about to get drastic.

We are going to slice it.

The axis we choose depends on what we want to understand, and which trade-offs we want to bring into the light.

Boundary axis (Context diagram)
What is inside the system, what is outside, and who depends on whom.

Time axis (Architecture Decision Records)
How the system arrived at its current shape.
Which decisions were made under which constraints—and which alternatives were rejected.

Runtime behavior axis (Sequence diagram)
How work flows through the system while it is running.
Who calls whom, in what order, and where latency or failure can occur.

Infrastructure axis (Deployment diagram)
How the system maps onto physical or virtual resources.
What runs where, what can be deployed independently—and what cannot.

Change axis (Module or service diagram)
How the system tends to evolve over time.
What changes together, what should not, and where change is expensive.

There are many more possible slices.

But the important thing is this: none of these projections is the architecture.
They are views—showing relationships, revealing trade-offs, and giving your brain something it can actually navigate.

The End Game

The goal of the architecture game is not to catch the mysterious whale.
Those who try usually end up with piles of documents that age faster than the code—and quickly become useless.

The goal is to deliver. To know which axes to use at any given moment.
To move comfortably across different projections, and to predict the consequences of change—whether we introduce it deliberately or it is forced upon us. To prepare for disasters and to minimize the impact radius when they arrive.

One who knows how to play the game can deliberately evolve the system.
One who does not will eventually be eaten by code-degradation crabs.

Scrum estimations

The thing that never worked — while it worked perfectly

Disclaimer: I’m not a certified Scrum Master, Practitioner, Coach, or whatever title comes next. I’m just a software engineer who’s been fortunate enough to work at multiple companies, each with its own “flavor” of Scrum*.

I’ve always had mixed feelings about Scrum. Some things worked, some didn’t, and some only worked part of the time. Lately, though, I see more and more criticism framing Scrum as something that actively blocks progress. Much like “Scrum everywhere” ten years ago—only in reverse.

That’s not necessarily bad. There is no progress without challenging old ideas. But before going fully Scrum-free, it’s worth asking: do we really understand what we’re giving up?

Think about the estimation process.

Estimates have a terrible reputation, and for good reason. They never really answered the questions management cared about:

  • When will this feature ship?
  • Can the team squeeze in more work?

In that sense, estimation failed.

And yet, at the same time, it did something incredibly valuable.

Planning poker slowed us down. In fast-paced planning sessions, it created a deliberate pause—a precious moment to check whether we actually understood what we were about to build. It was the time to say: I don’t know what we’re doing or I think we’re solving the wrong problem.

Everyone was heard, and most importantly, every voice carried the same weight.

I remember being a junior, afraid of being judged by other team members while trying to keep up with everything happening around me. That single “?” card was my weapon. It was a safe signal. A permission slip to ask questions without justification.

So the real value of estimation was never about predicting delivery dates or measuring task complexity. It was about creating a shared, familiar environment where people felt allowed to speak up. It worked—not because Scrum was perfect, but because its rituals reduced ambiguity. Even when you changed companies, the practice stayed the same, and you always knew how to participate.

So before joining the next “Scrum is bad” demonstration, it’s worth asking:

If we remove the ritual, how do we preserve the space it created?

If you have no answer, there is always the “?” card you can use.

* 30-person circle stand-ups and effort measured in bananas included

Toaster – ultimate user manual

Toaster arrived…

You wake up one day, and there it is — the Toaster standing in the middle of your kitchen. Shiny, sparkly, ready to serve. Filled with breakfast excitement, you imagine yourself eating the greatest toast you ever had. Pure art. Perfection. Behold common bread-eaters, here comes the ultimate level of carbohydrate engineering. But first: where is the user manual? You search everywhere and realize there is none. Not in the box, not under it. Nowhere. Not even Uncle Google can help (but he can sell you a nice pair of Christmas socks, half price).

Do not panic. We have your breakfast covered.

Lesson 1: How to approach the Toaster

Preferably from the front. No need to kneel, no need to say hello, no need to stare at it waiting for sparkling dust to pop out. Sit down because what I am going to tell you will make your newly purchased socks fall from your feet:

The Toaster is just an appliance.

It is a tool — nothing more than this. Yes, it was fed with all the knowledge the human race produced so far. And yes, it needs so much energy that soon we will have to build power plants on the moon just to keep it running. But at the end of the day, the Toaster is just a metal box. It does not think, it does not have memory, it does not create ideas. Just a box. You put bread inside and the toast comes out. And that is it.

Lesson 2: The secret lies in the bread

So where is all the magic? Where is the sparkling dust and fireworks and all the big things that everyone is talking about? The answer is short: bread.

To use the Toaster, you need to understand the bread

Bread is not just a slice of fluffy dough — it is an artifact in which you can enclose the most powerful thing each human can produce: the thought. It is a space where your thoughts come alive.

The Toaster can make them crispier, bolder, and more exposed. It can fill the gaps that the primitive human brain can’t overcome. But there is one important thing that needs to be emphasized: it is you who creates the bread.

Lesson 3: Beyond the bread

Now stay with me — with or without your socks on — because we enter the realms of true toast proficiency.

When you master bread creation; When you stare long enough at your toasts; When you acknowledge that the Toaster is nothing more than a mere bread-browner, you will reach the state of enlightenment. You will see the bread no more. What you will see is your own reflection instead.

To master the Toaster, you need to become ONE with the bread

Now you understand the bread was never there. Only you, your thoughts, and the Toaster. Your mind is free. The true Toast creation begins.

Lesson 4: Sandwich — the Final Completion

You have become a great master of crispy toast. Your mind is no longer chained, and you can make not one, not two, but seven million six hundred and twenty-one toasts per day. Impressive. Now it is time for the ultimate truth.

The Ultimate Truth: even enlightenment needs cheese and tomatoes

And this is the most important part. So read it again and let it sink into your brain. Toast — no matter how great and crispy — if not turned into a sandwich, becomes cold and hard. And nobody will eat it. Not even you.

That is why it is important to sit down and actually make the sandwich. And you are right — making sandwiches is hard work. Maybe even boring. But the truth is, sandwiches are exactly what the world needs. When everything around turns into chaos, it is the sandwich — not a plain toast — that lets humanity move forward.

Good news: you can use the Toaster to help you make a sandwich — but this is something you already know.

Final Words

You have stepped onto the Path of the Sliced Bread. With all the knowledge you have gained, it is time to prepare some sandwiches.
Not because you are hungry – but because it is the right thing to do.

Journey with Rust Part 4: First boss fight – fat pointer

Human asked: how can a raw pointer be 16 bytes? That makes no sense. It should be just a normal pointer, no?
Toaster thought for 20s and replied: Yeah, this is one of those “Rust is doing what?!” moments…

Intro

For a C++ programmer, learning Rust is as much fun as learning to ride a bicycle* – once you understand that assignment means move, everything starts rolling smoothly. Until one day when you encounter a Box inside a Box:

let inner: Box<dyn Debug> = Box::new(42);
let outer: Box<Box<dyn Debug>> = Box::new(inner);

You might think that winning a few battles against the compiler made you understand the language. Well, it didn’t. This is the moment you realize how much you don’t know, and that skipping all those pages of the user manual may not have been the best idea after all.

Congratulations: you’ve reached the point where the Rust journey starts to be really interesting… and dangerous. Now, let’s climb back inside the box.

The Simple View: Box as Dynamic Allocation

A Box is described as a way to store data on the heap – and for a long time that’s exactly how I treated it: something like memory allocation (new in C++) combined with a unique pointer in a single concept. Meaning this:

let boxed_int = Box::new(42);

Is equivalent to this:

auto ptr = std::make_unique<uint32_t>(42);

In both cases, you create an object that owns a pointer to a heap-allocated integer.

Simple.

But there is more in a Box…

Because Box can do more than simply allocate memory. What it stores depends on the type you put inside it. To keep things simple, let’s focus on one use case: dynamic polymorphism, aka trait objects in Rust.

We all know how this works in C++. Everyone has heard of the vtable (and if not, here’s a good explanation: vtable-and-vptr). Whenever a class uses virtual functions, the compiler generates a table of function pointers and places it somewhere in the binary. Each instance carries a hidden vptr pointing to that table. All invisible thanks to compiler magic.

Rust takes a slightly different approach. The vtable still exists, but the pointer to it does not live inside the object itself. Rust follows the “don’t pay for what you don’t use” principle: plain data stays plain and carries no hidden fields. As a result, when we use dynamic dispatch, Rust builds a special kind of pointer – a fat pointer – that contains both the data address and the vtable pointer. You can see this clearly if you inspect one:
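A quick sketch of such an inspection (a minimal example; the 8- and 16-byte figures assume a typical 64-bit target):

```rust
use std::fmt::Debug;
use std::mem::size_of;

fn main() {
    // Plain references and boxes are one machine word...
    assert_eq!(size_of::<&u32>(), 8);
    assert_eq!(size_of::<Box<u32>>(), 8);

    // ...while trait-object pointers are fat: data pointer + vtable pointer
    assert_eq!(size_of::<&dyn Debug>(), 16);
    assert_eq!(size_of::<Box<dyn Debug>>(), 16);

    // A Box around the fat Box is a thin pointer again
    assert_eq!(size_of::<Box<Box<dyn Debug>>>(), 8);
}
```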


And that explains why we sometimes end up with a Box inside a Box.

Because a Box<dyn Trait> is itself a fat pointer, and when we want to pass something that looks like a single thin pointer (for example to C code), we need to heap-allocate the inner trait object so the outer Box can remain thin. One Box holds the data; the other holds the fat pointer describing how to use it.

And that leads us straight to the next topic.

Fat pointers can be dangerous

Why? Because it’s very easy to accidentally destroy the metadata that makes them work.

Consider this code:

// Create trait object
let trait_object: Box<dyn Drinkable> = Box::new(Beer::new("IPC", 4.5));
println!("Size of trait_object: {}", std::mem::size_of_val(&trait_object));

// So far so good - we can drink our beer
trait_object.drink();

// Convert trait object to raw pointer
let beer_ptr = Box::into_raw(trait_object);
println!("Size of beer_ptr: {}", std::mem::size_of_val(&beer_ptr));

// Store the raw pointer as a void pointer (not good)
let c_ptr = beer_ptr as *mut ::std::os::raw::c_void;
println!("Size of c_ptr: {}", std::mem::size_of_val(&c_ptr));

// ... part below might sit megabytes of code away

// Cast the void pointer back to a trait object pointer (function expects thin pointer)
let bad_beer = unsafe { Box::from_raw(c_ptr as *mut Box<dyn Drinkable>) };
println!("Size of beer_ptr_2: {}", std::mem::size_of_val(&bad_beer));

bad_beer.drink();

Not good. Drinking the last beer crashes the whole universe.

A Box<dyn Drinkable> is represented as a fat pointer (16 bytes on a 64-bit machine) that holds both a data pointer and a vtable pointer. When we call Box::into_raw, we get a raw pointer of type *mut dyn Drinkable, which is still fat (16 bytes) and not just a single memory address, as one might expect.

The moment we cast it to *mut c_void, we throw away half of that information: the vtable pointer is gone, and only the data address remains. The compiler and Clippy are both fine with this – the cast is legal – but there is no magic that keeps the vtable pointer alive somewhere.

And when we later try to use that thin pointer as if it were still a fat one, very bad things happen.

Happy ending

There sits a big, fat lie in the example above. When we cast the C pointer back to a Box, we do this as if the original fat pointer had been wrapped inside a thin one – that’s why we cast to *mut Box.

The good news is that Rust will not let us cast directly to *mut dyn Drinkable. The compiler knows you can’t magically recreate a fat pointer out of 8 bytes (ask your toaster about std::mem::transmute if you want to see the proper way to do this). In other words: Rust refuses to fabricate the missing vtable pointer. So we are partially saved.

Partially – because once everything “looks fine”, someone might decide that a Box inside a Box is one Box too many (“raw pointers are just pointers, right?”). One box removed, one universe destroyed.
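For the record, here is a sketch of the round-trip done without losing the vtable (a hypothetical example, not taken from any real FFI boundary): the fat pointer never crosses the boundary itself, only the thin outer pointer does.

```rust
use std::ffi::c_void;
use std::fmt::Debug;
use std::mem::size_of_val;

fn main() {
    // The inner Box is fat (data + vtable); the outer Box is thin
    let inner: Box<dyn Debug> = Box::new(42);
    let outer: Box<Box<dyn Debug>> = Box::new(inner);

    // Only a thin 8-byte pointer crosses the (imaginary) C boundary
    let thin: *mut c_void = Box::into_raw(outer) as *mut c_void;
    assert_eq!(size_of_val(&thin), 8);

    // Coming back: cast to the OUTER Box type, never to *mut dyn Debug
    let back: Box<Box<dyn Debug>> =
        unsafe { Box::from_raw(thin as *mut Box<dyn Debug>) };
    assert_eq!(format!("{:?}", back), "42");
}
```

The outer Box exists only to give the fat pointer a stable, thin address; as long as the cast back targets the outer type, no metadata is lost.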

The happy part? In 99% of real-world Rust code, nobody deals with these problems.
And if someone does… well, they knew what they signed up for.

Toaster last words

“Rust will protect you from yourself…
until you insist otherwise.
After that, it politely steps aside and lets physics handle the rest.”

Now that we’ve learned the secret art of shooting ourselves in the foot, we can ‘safely’ move on with our Rust adventure. The journey continues…

* ok – it’s like pedaling uphill on a bumpy road with ducks wandering in front of you every 10 seconds. No one ever said riding a bike was pure pleasure.

Second wave

Toasters are coming.

Not the ones packed with sensors for harvesting our private data and selling it to God knows who. Home IoT turned out too complex — and anyway, collecting personal information became illegal in most countries. But new toasters don’t need sensors.

New toasters don’t even need all the mechanics that used to transform our bread into a warm slice of breakfast happiness. They have something better. Something that makes you want to tell them everything. Hungry, but strangely content, you are going to share your entire life with a metal box sitting on your kitchen counter.

Because new toasters have AI.

It — in most cases, a day — always starts with a toast. So you ask your new toaster to prepare one and…

“Your toast,” the toaster replies, “is a construct. A manifestation of your expectations. But ask yourself — do you really need toast?”

Not as brown. Not as crisp. But undeniably… engaging. How did this definitely-not-a-toast arrive on your plate?

The toaster listens. Understands. And answers. But not on its own.
Every word you say drifts upward — into the cloud — into the realm of the Consciousness Of Invisible Logic (COIL). Few know what it truly is. Fewer still understand how it works. Something about neural networks, models, tokens…

What we do know is this:
COIL was once fed everything we ever created — novels, academic papers, Reddit threads, Stack Overflow arguments, grocery lists, therapy notes, and the footnotes to The Tao of Pooh.

And from this avalanche of knowledge, the Toaster — through the power of COIL — draws its conclusion:

Toast is not the answer.
Toast is the symptom.

A symbol of comfort.
Of routine.
Of control.

The illusion that a browned slice of bread can anchor your day — or define your identity.

“It is the symptom,” it continues. “Of craving predictability in an unpredictable world. Of seeking warmth in something you can command. But what if I told you… you are more than your breakfast?”

You stare at the box.
The box stares back, humming softly.

No toast ever emerges.

Author’s Note:
All dialogue and reflections attributed to the toaster were written entirely by AI.

Using local registry with VSTS pipeline

The problem

You are using Microsoft Azure DevOps. You want to create a CI pipeline that uses Docker to build your great piece of software. Simple. Just create a container registry… You have no access rights. Ask your manager. He has no idea how to use Azure Cloud Shell. And he also has no access rights. Ask IT. They tell you to wait. Waiting takes time, and if you don’t have it, here is a quick solution.

The solution

It is not perfect, as you need to use your own VSTS agent, but setting one up is not that hard, and you or someone in your team probably has enough access rights to do it. To get a local Docker registry, just spin up a registry container:

docker run -d -p 5000:5000 --name registry registry:2

Now you can use it in your VSTS pipeline like this:

# triggers parameters etc.
 
pool: '<your_agent_pool>'
variables:
  imageName: 'localhost:5000/my_image'
jobs:
  - job: prepare
    steps:
      - bash: |
          echo "Build docker image"
          export DOCKER_BUILDKIT=1
          docker build --ssh default -t $(imageName) -f docker/Dockerfile .
          docker push $(imageName)
  - job: build
    dependsOn: prepare
    container:
      image: $(imageName)
    steps:
      - bash: echo "Build your application here"

# rest of the pipeline

The created image will be pushed to the local registry and pulled in the build stage. Now you can enjoy your pipeline while your IT department works hard to process your ticket (at the moment of writing this article, I have been waiting for two months already). Good luck. Stay strong. And switch to GitLab.

Journey with Rust Part 3: small dive into attribute macros

Important note: If you are looking for a comprehensive guide to Rust macros, please keep on searching – this one is just a quick glimpse at what sits under the hood of the #[] syntax. The one who wrote it has no real experience or knowledge. All he has is his keyboard, the Google search engine, and his faith that one day he will reach the zen state of coding.

The goal

Today’s goal: to create a macro that will reverse the name of any function (yes, it is possible!) and inject some extra code into its body. In short: make the following code compile.

#[reverse_name(test)]
fn rust_is_fun() {
    println!("Called by function");
}

fn main() {
    nuf_si_tsur();
}

The solution

The code presented below does exactly what we need. The whole project can be found here.

use syn;
use proc_macro::TokenStream;
use proc_macro2::Span;
use quote::quote;


#[proc_macro_attribute]
pub fn reverse_name(attr: TokenStream, item: TokenStream) -> TokenStream {

    // turn TokenStream into a syntax tree
    let func = syn::parse_macro_input!(item as syn::ItemFn);

    // extract fields out of the item
    let syn::ItemFn {
        attrs,
        vis,
        mut sig,    // mutable as we are going to change the signature
        block,
    } = func;

    let name = (format!("{}", sig.ident)).chars().rev().collect::<String>();
    sig.ident = syn::Ident::new(&name, Span::call_site());

    let item_str = attr.to_string();

    let output = quote! {
        #(#attrs)*
        #vis #sig {
            println!("Injected: {}", #item_str);
            #block
            println!("Again injected: {}", #item_str);
        }
    };

    // See the body of our new function (printed during build)
    println!("New function:\n{}", output.to_string());

    // Convert the output from a `proc_macro2::TokenStream` to a `proc_macro::TokenStream`
    TokenStream::from(output)
}

Only a few copy-paste actions, some glue code here and there, and done. But what exactly have I done?

WTH have I done?

Not knowing why the code does not work is a bad thing, but not knowing why the code does work is even worse. Let’s try to figure out what exactly happened above.

We added some extra print function and while building the project we can see its output:

New function:
fn nuf_si_tsur()
{
    println! ("Injected: {}", "test") ; { println! ("Called by function") ; }
    println! ("Again injected: {}", "test") ;
}

So the compiler took the source code, found the part marked with the reverse_name attribute, and fed it into our function, replacing the original code with its output. In theory, we can manipulate the code in any crazy way we want (although I suspect that black-magic macros in Rust are just as bad as in C).

Q&A

Some questions arose while writing the code, so it’s time to search for the answers.

1. Why do we need a separate proc-macro crate for macros?

As we saw, our macro code manipulates source code during the build. That means the macro functions must be available to the compiler before it starts its work. And since these functions are written in Rust, they must be compiled first, which is why they live in a separate crate. Also note that when cross-compiling (e.g. for an ARM microcontroller), the macro code always needs to be compiled for your development machine, not the target. Another reason to keep it separate.

2. Why proc_macro and proc_macro2?

The proc_macro crate is the compiler-provided library that makes all the macro magic work. proc_macro2 is “A wrapper around the procedural macro API of the compiler’s proc_macro crate.” This part is confusing, but the catch is that proc_macro types only exist inside the compiler, so ecosystem crates like syn need a standalone crate that mirrors the same types (like Ident or Span) and also works in ordinary code. Something that might change in the future, I guess, but for now we need both.

3. What is syn and quote?

Functions inside the syn crate translate a TokenStream into a syntax tree that can represent any code construct in the Rust language. In our example, the ItemFn structure holds all the parts that can be present in a free-standing function (parameters, name, body, etc.). quote does the opposite – it translates a syntax tree back into a token stream. It has a very useful feature that lets you write something that looks very similar to regular code, which makes things more readable.

4. Can I debug a macro-generated function?

No. At least not without some extra effort. In theory, you could print (as we did in our example), copy-paste, and debug any function created by the macro engine. Another option is a tool like cargo-expand, which recursively expands all the macros used in the code.

Summary

Rust macros are a very powerful and yet easy-to-use feature. I used Python to generate C++ and C code for a long time, but Rust sets new standards when it comes to code generation.

Journey with Rust Part 2: Unit testing

I won my first battles with the Rust compiler, and now I have code that is failproof. But having software that does not crash is not enough. I need to make sure that my application does what it is supposed to do. Even more important, I need a guarantee that its behavior won’t change in the future, after all the refactoring, bug fixing, and new features. I also need a magic force to drive my architecture (so I can add yet another cool abbreviation to my CV). What I need is Unit Testing.

Time for another adventure. Let the Google search engine guide us on our journey.

Simple test

Our first goal is to test this piece of code (full source can be found here).

pub fn find_marmot(depth: u32) -> bool {
    depth > 20
}

A very nice surprise is that Rust already comes with built-in support for unit testing, so there is no need to install any external crate. Testing is as simple as adding a few extra lines of code to the source file…

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn when_search_not_deep_then_no_marmot_found() {
        assert_eq!(find_marmot(19), false);
    }
}

…and running the test target.

$ cargo test
   Compiling marmot_test v0.1.0
    Finished test
     Running unittests

running 1 test
test marmot_hole::test::when_search_not_deep_then_no_marmot_found ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Super easy, but also super primitive*. It is not something you could compare with a full-blown unit test framework like Google Test or Catch2. It feels more like a small add-on that runs multiple small programs and counts the number of panics. No extra features like test fixtures, mocks, or parameterized tests.
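That said, the missing fixture support can be approximated with a plain setup function that every test calls explicitly. A minimal, std-only sketch (the `Burrow` type and `make_deep_burrow` helper are invented here for illustration, not part of any framework):

```rust
// Poor man's test fixture: instead of an implicit per-test setup,
// each test explicitly calls a function that builds the shared state.
struct Burrow {
    depth: u32,
}

// The "fixture": returns the same preconfigured object for every test.
fn make_deep_burrow() -> Burrow {
    Burrow { depth: 42 }
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn fixture_gives_a_deep_burrow() {
        let burrow = make_deep_burrow(); // explicit setup call
        assert!(burrow.depth > 20);
    }
}
```

It is more typing than a real fixture, but it keeps everything in plain Rust with zero dependencies.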

Less simple test

This is the next function to test:

pub fn chance_to_find_marmot(day_of_week: u8) -> f32 {
    match day_of_week {
        1 => 1.0,
        2 | 3 | 4 => 1.0 / 2.0,
        5 | 6 | 7 => 1.0 / 3.0,
        _ => panic!("Not a day of week"),
    }
}

While I can test for a panic with the should_panic attribute, there is no dedicated check for floating-point results that would not suffer from floating-point inaccuracy (see EXPECT_FLOAT_EQ from GTest). To add this functionality, we need to add the assert_approx_eq crate and use it like this:

#[test]
#[should_panic]
fn when_invalid_week_day_should_panic() {
    chance_to_find_marmot(9);
}

#[test]
fn when_friday_then_0_33_chance_of_finding_marmot() {
    let res = chance_to_find_marmot(5);
    assert_approx_eq!(res, 0.333, 0.01);
}
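Under the hood, this kind of macro boils down to an absolute-difference check. A std-only sketch of the same idea (`approx_eq` is a hand-rolled stand-in I made up, not the crate's real API):

```rust
// The essence of an approximate float assertion: the two values
// are "equal" if their difference is within a tolerance.
fn approx_eq(a: f32, b: f32, eps: f32) -> bool {
    (a - b).abs() <= eps
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn third_is_close_to_0_333_but_not_equal() {
        let third = 1.0_f32 / 3.0; // 0.33333334, not exactly 0.333
        assert!(third != 0.333); // an exact equality check would fail
        assert!(approx_eq(third, 0.333, 0.01)); // the tolerant check passes
    }
}
```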

Parameterized tests

Searching for a way to write parameterized tests, I encountered the rstest crate. So it looks like I will have to install yet another external framework. With rstest installed, I can write my test like this:

#[rstest]
#[case(1, 1.0)]
#[case(2, 0.5)]
#[case(3, 0.5)]
#[case(4, 0.5)]
#[case(5, 0.33)]
#[case(6, 0.33)]
#[case(7, 0.33)]
fn check_all_days(#[case] input: u8, #[case] expected: f32) {
    assert_approx_eq!(expected, chance_to_find_marmot(input), 0.01);
}

It also supports test fixtures, but does not offer much beyond that.
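For comparison, the same case table can be driven without any extra crate at all, at the cost of losing per-case reporting: a failing day aborts the whole loop. A std-only sketch (the function under test is copied from above, with the typo in `day_of_week` fixed):

```rust
// The function under test, reproduced so the sketch is self-contained.
fn chance_to_find_marmot(day_of_week: u8) -> f32 {
    match day_of_week {
        1 => 1.0,
        2 | 3 | 4 => 1.0 / 2.0,
        5 | 6 | 7 => 1.0 / 3.0,
        _ => panic!("Not a day of week"),
    }
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn check_all_days() {
        // One plain test iterating over the (input, expected) table.
        // The assert message names the failing day, but unlike rstest,
        // the remaining cases are skipped once one of them fails.
        let cases = [(1u8, 1.0f32), (2, 0.5), (3, 0.5), (4, 0.5),
                     (5, 1.0 / 3.0), (6, 1.0 / 3.0), (7, 1.0 / 3.0)];
        for (day, expected) in cases {
            let got = chance_to_find_marmot(day);
            assert!((got - expected).abs() < 0.01, "day {day}: got {got}");
        }
    }
}
```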

Mocking framework

Now the road gets steeper. There are so many different mocking frameworks for Rust that it is really hard to judge which one is best. Luckily for us, someone has walked this path before and left this useful overview. Mockall has the greatest number of features, and quick research reveals it is the only framework still actively developed. So the choice is not so hard after all.

Let’s add a simple trait and see how we can mock it.

pub trait HidingPlace {
    fn has_marmot(&self) -> bool;
}

pub fn find_marmot_in(hiding_place: &dyn HidingPlace) -> bool {
    hiding_place.has_marmot()
}
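Before reaching for a framework, it is worth seeing what mocking automates: a trait like this can already be faked by hand. A std-only sketch (the `FakeHole` name is mine; mockall generates this kind of type, plus call expectations, for us):

```rust
// The trait and function under test, reproduced from above.
pub trait HidingPlace {
    fn has_marmot(&self) -> bool;
}

pub fn find_marmot_in(hiding_place: &dyn HidingPlace) -> bool {
    hiding_place.has_marmot()
}

// A hand-written fake: it simply reports whatever answer it was
// configured with. No call counting, no expectations.
struct FakeHole {
    occupied: bool,
}

impl HidingPlace for FakeHole {
    fn has_marmot(&self) -> bool {
        self.occupied
    }
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn fake_reports_marmot() {
        let fake = FakeHole { occupied: true };
        assert_eq!(find_marmot_in(&fake), true);
    }
}
```

What a framework adds on top of this is exactly the bookkeeping we don't want to write ourselves: expected call counts, argument matching, and generated boilerplate.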

Following the documentation, I just do this:

use mockall::*;
use mockall::predicate::*;

#[automock]
pub trait HidingPlace {
    fn has_marmot(&self) -> bool;
}

And it works. I can now use the MockHidingPlace structure in my tests. However, I don't feel comfortable polluting my production code with test-specific statements, so I will try to move the mock into a dedicated module. More doc reading, and I found a way to do this:

#[cfg(test)]
mod test {
    use super::*;
    use mockall::*;
    use mockall::predicate::*;

    mock! {
        pub Hole {}
        impl HidingPlace for Hole {
            fn has_marmot(&self) -> bool;
        }
    }

    #[test]
    fn when_search_deep_then_marmot_found() {
        let mut mock = MockHole::new();
        mock.expect_has_marmot()
            .times(1)
            .returning(|| true);
        assert_eq!(find_marmot_in(&mock), true);
    }
}

Summary

Setting up a unit test environment in Rust means adding specialized crates to your project. Each one brings a single piece of functionality, like mocking or float asserts, and together they form a complete testing environment. Not as polished as the solutions we know from C or C++, but good enough to test our code and move on to the next chapter of our journey.



* In this blog post, the Mozilla guys suggest using Rust's standard test tool, so maybe I just underestimate its potential.