Under the Broken Code

There is a tavern every tech sailor knows.

It’s where crews come ashore after long voyages through hostile seas — to rest, to trade stories, to remember old journeys and pretend they were simpler than they really were.

But most of all, they come for a drink.

The innkeeper pours rum without asking. If you sit at the bar long enough, he will lean closer and tell you a story — about the greatest danger a sailor can meet on the open sea. A story about the siren’s song, and three brave captains who listened to it.

“Ay,” he says.

“I served on many ships, under many commands. But three captains I remember to this day. Fine men, all of them. The best I ever saw. All gone mad. One by one…”

He takes a sip.


“The first captain — strong, proven. We won many battles with him. Shipped many systems. But one day… he started listening to the sirens.”

‘We always did things in C!’ he shouted.
‘And we will keep doing things in C! Arr!’
‘If anyone disagrees, let me remind you — Linux was written in C!’

So everyone wrote in C.

The ship still sailed, no doubt about that. But every complex change took ages. Every repair felt like carving a mast with a knife.


“Another captain,” the keeper continues, “a clever one. Loved elegance.”

‘Functional programming works perfectly on the backend!’
‘So make me monads in C++11! Arr!’

And there were monads. Everywhere.

The ship sailed. But no sailor could tell what the code was, what it did, or why it still floated.


“And then there was the third. He spent many years learning to sail the Yocto boat. And Yocto became the answer to every question.”

‘Yocto.’
‘Yocto everywhere. Arrr.’

One day, a big cruise ship required a mast replacement. We spent a month searching for one. Then another month rebuilding half the ship so the sail could be green.


“Fine captains,” the keeper says quietly. “Truly. Brave. Skilled.”

He stares into his glass.

“But the sirens — they sang to them. Afraid of being wrong, they stopped listening to their crews and started listening to the song.”

You notice the keeper pouring rum for himself. His eyes are tired. Sad. He looks out the window, toward the dark sea.

“Now listen to me, young sailor. There is a new danger out there,” he says.

He leans closer. “Close your eyes and listen.”

You close your eyes and focus on the tavern noise — people talking, glasses clinking. You catch fragments of conversation.

“…and we need no crews anymore. Ayyy.”
“…I can build any ship I want. Alone. Ayyy…”
“Ships will sail by themselves…”

“Can you hear it?” he asks. “And look around you. Some of those lads don’t even know how to tie a proper knot.”

“But all of them have the same shine in their eyes.
The same certainty.”

He finally looks at you.

“Not madness born from failure,” he says.
“But madness born from success.”

A pause. He studies you for a long moment, as if deciding whether to end the conversation — or share one last thing.

“Ships that need no crew… ships that build themselves… maybe they will sail someday. Not for me to judge. I never held a helm in my life — all I did was clean decks. I talk about captains, yet I never dared to be one. That’s the truth.”

“But there is one thing I know. One thing that terrifies me even more than the sirens.”

“The sea is changing. And there are new monsters living in it. Ones that don’t drive people mad.”

“Ones that steal their souls.”

You write a text.
You write code.
You create.

And you hear a new call from the sea:

‘It is not good enough.’
‘Your timing could be better.’
‘The code could run faster.’
‘Let me help you… if you want to push it further…’

So you give your work to the sea.

It returns. Better. Sharper.

But something is missing.

A small piece of you never comes back.

Welcome to the Tavern Under the Broken Code.

Lift your cup and drink.
To the sea that calls us every day.
To the captains driven mad by sirens.
To those who trusted the sea
and forgot how to sail.

Drink, and listen.
Not to the bartender. Nor to the sea.
Listen—to yourself.

Earth is flat. A short story of a lost thought.

It all started with a LinkedIn post. Nothing new — this week’s mandatory opinion, recycled with different words. Typical social media noise. Someone disagreed. Strongly enough to reach for heavy artillery and call the author a “flat-earther.” Boom. And with the recoil, I got hit too.

The Earth is flat!

That rang a bell. I remembered an old, insightful, and funny conversation with AI about… something. The problem was, all I could recall was the conclusion: the Earth is flat.

Nothing to worry about. I had my notes. A small document where I saved AI output worth keeping. I found this:

“Turns out the Earth is flat after all.”

Helpful. Thank you, past me, for trusting future me’s memory so much. Present me now had to reconstruct an entire line of thought from a single sentence. Good luck with that. Spacetime? Pancakes? Nothing clicked.

Then it hit me: if AI was involved, the process would still be there. AI would remember. The search took longer than expected, but eventually, I found it.

It wasn’t about the Earth at all. It was about information gradients—and how social media flattens them. Original ideas create spikes that, over time, get spread, diluted, and leveled across platforms. Until everyone is repeating the same thing, convinced they’ve discovered something new—while collectively ensuring everything becomes flat.

Thanks to AI, I was able to rediscover a thought that would otherwise have been lost. A thought that taught me nothing new—yet somehow felt exactly right.

The Secret Art of Keeping the Archwhale Alive

The Beast

There is a whale no one sees, circling slowly beneath the surface of every software project.

A mighty beast that carries systems on its back.

Be aware of its strength. When it is weakened or forgotten, it can pull the entire project down into the black depths of the entropy sea. And it does this so slowly that by the time someone realizes what is happening, it is already too late. Planning turns to chaos, change becomes impossible, and there are no more doughnuts from the manager. People leave as the music fades into its final violins*. And the light goes out.

Flip the soundtrack

Things don’t need to end this way—if we simply give our archwhale what it craves most: attention.

And when I say “we,” I mean everyone involved in the project. Each of us adds a small piece to the story. Adding something means taking responsibility for it.

Now the most important part: to care about a whale is not just to think about it (even if your thoughts are warm, sophisticated, or reach far into the future).
To care about a whale is to take a knife and cut it into pieces**.

Chop chop chop?

Yes—but not so fast.

First, let’s clarify what this actually means.

As explained in this article, there are countless axes along which architecture can be sliced, depending on intent. Search long enough and you’ll find hundreds of possible artifacts: designs, diagrams, documents—plus frameworks and blog posts comparing architecture to whales, bridges, or chocolate cakes.

So our first problem isn’t a lack of options, but an excess of them.

We can’t just start creating projections at random. Too much documentation is as harmful as too little. Before we start running around with diagram-knives, we need to stop and ask a simple question:

What are we actually trying to achieve?

The spatial dimension

You carry the project vision inside your head. You navigate it effortlessly. You know where things are solid—and where shortcuts were taken just to keep things moving. You already plan new features, consider possible risks, and think about how to mitigate them.

What lives in your head is similar to what an author carries when writing a book: an entire universe where the real story unfolds. Just like you, the author can explore multiple possible futures playing out inside it.

Now imagine not one author, but a hundred, all writing the same book. Without synchronization, one kills the main character while another sends him to Scotland to find a brother who was never missing.

The universe must be shared.

That’s why we externalize it. Architecture artifacts—API contracts, dependency graphs, interface boundaries—are projections of the system that enable shared reasoning, coordination, and onboarding, keeping the universe stable while many minds shape it at once.

The time dimension

You carry the project vision inside your head.

Today.

Tomorrow your attention shifts. A month from now, you won’t remember why things are the way they are.

“It’s all in the code,” one might say. But that’s not true. Many decisions don’t affect how code is written, but how it is not written.

Why was language X chosen instead of Y?
Was market availability considered? Ecosystem maturity? Team experience?
And when a framework was selected, which trade-offs were accepted—and are they still valid?

What we want to record is not just why we chose A, but the full reasoning behind that choice.

In this sense, architecture artifacts are memory. We use them to keep the universe stable while time passes.

Not just records — thinking surfaces

Artifacts have one more important function: they act as thinking surfaces—places where ideas are tested before they harden into decisions.

You definitely know how this works. You don’t create class diagrams when classes already exist in code—you do it before, to see how dependencies might look. This lets you reason at a higher level of abstraction than the implementation.

The same applies to ADRs. Instead of writing an ADR after a choice is made, start earlier. Capture doubts, alternatives, and trade-offs. After execution, clean it up and keep it.

This suggests that artifacts should be created only when we actively work on a subject. In general, yes—but they should also be reviewed from time to time (for example, at each major release). Check whether they still carry information worth caring about. Outdated artifacts can be archived so they don’t introduce unnecessary noise.

Time for sushi

Now we are ready. We know what we want—and, more importantly, why. As in everything in the universe, balance matters. The number of produced artifacts must be just enough to keep the project synchronized across space and time. This way, it stays on the edge of exploration while remaining stable.

And remember: architecture survives only as long as people actively care for it.
Not admire it.
Not remember it fondly.

Care for it through small, deliberate acts: revisiting decisions, updating maps, removing what no longer matters, making the invisible visible again.

Ignore it, and it will not protest.
It will simply sink.

* Max Richter — “On the Nature of Daylight” fits perfectly
** Space archwhales love to be sliced — it keeps them alive.

Software Architecture and a Cosmic Whale

Has Anyone Seen My Architecture?

There are countless definitions of software architecture.
Some emphasize decisions, others structures, others “the important stuff,” or whatever is hardest to change. Read enough of them and architecture begins to feel like something that slips through every classification—a creature everyone describes differently, yet no one seems to have seen.

And yet, this creature clearly exists. No one doubts that.
We recognize it by its effects: slow delivery, bugs that refuse to die, changes that feel far riskier than they should, systems that push back against even the smallest improvement.

The Mysterious Creature

One might try to exercise the imagination—to picture something that lives partly in code and partly in our heads. A multidimensional entity, not bound to a single moment in time, but stretched across the full span of its existence. Shaped by past decisions and external forces, while simultaneously guiding—and constraining—what changes are possible next. With enough effort, one might even convince oneself of having seen it.

But that is not the point.

We are software developers. Our job is not to chase mystical creatures, but to solve problems. We have deadlines. Features. Things that must work. We have bugs that reliably appear at 3 a.m.

What actually matters are the long-term consequences of change:

  • Whether, given what we have today, we can meet business requirements tomorrow.
  • Where to look when things begin to break apart.
  • Whether deleting a piece of code is safe—or the first step toward disaster.

Chop It!

To reason about architecture, we do what physicists do with spacetime—a similarly ungraspable monstrosity. If you are still holding on to some animal-like mental picture of architecture, now is the time to let it go. Things are about to get drastic.

We are going to slice it.

The axis we choose depends on what we want to understand, and which trade-offs we want to bring into the light.

Boundary axis (Context diagram)
What is inside the system, what is outside, and who depends on whom.

Time axis (Architecture Decision Records)
How the system arrived at its current shape.
Which decisions were made under which constraints—and which alternatives were rejected.

Runtime behavior axis (Sequence diagram)
How work flows through the system while it is running.
Who calls whom, in what order, and where latency or failure can occur.

Infrastructure axis (Deployment diagram)
How the system maps onto physical or virtual resources.
What runs where, what can be deployed independently—and what cannot.

Change axis (Module or service diagram)
How the system tends to evolve over time.
What changes together, what should not, and where change is expensive.

There are many more possible slices.

But the important thing is this: none of these projections is the architecture.
They are views—showing relationships, revealing trade-offs, and giving your brain something it can actually navigate.

The End Game

The goal of the architecture game is not to catch the mysterious whale.
Those who try usually end up with piles of documents that age faster than the code—and quickly become useless.

The goal is to deliver. To know which axes to use at any given moment.
To move comfortably across different projections, and to predict the consequences of change—whether we introduce it deliberately or it is forced upon us. To prepare for disasters and to minimize the impact radius when they arrive.

One who knows how to play the game can deliberately evolve the system.
One who does not will eventually be eaten by code-degradation crabs.

Scrum estimations

The thing that never worked — while it worked perfectly

Disclaimer: I’m not a certified Scrum Master, Practitioner, Coach, or whatever title comes next. I’m just a software engineer who’s been fortunate enough to work at multiple companies, each with its own “flavor” of Scrum*.

I’ve always had mixed feelings about Scrum. Some things worked, some didn’t, and some only worked part of the time. Lately, though, I see more and more criticism framing Scrum as something that actively blocks progress. Much like “Scrum everywhere” ten years ago—only in reverse.

That’s not necessarily bad. There is no progress without challenging old ideas. But before going fully Scrum-free, it’s worth asking: do we really understand what we’re giving up?

Think about the estimation process.

Estimates have a terrible reputation, and for good reason. They never really answered the questions management cared about:

  • When will this feature ship?
  • Can the team squeeze in more work?

In that sense, estimation failed.

And yet, at the same time, it did something incredibly valuable.

Planning poker slowed us down. In fast-paced planning sessions, it created a deliberate pause—a precious moment to check whether we actually understood what we were about to build. It was the time to say: I don’t know what we’re doing or I think we’re solving the wrong problem.

Everyone was heard, and most importantly, every voice carried the same weight.

I remember being a junior, afraid of being judged by other team members while trying to keep up with everything happening around me. That single “?” card was my weapon. It was a safe signal. A permission slip to ask questions without justification.

So the real value of estimation was never about predicting delivery dates or measuring task complexity. It was about creating a shared, familiar environment where people felt allowed to speak up. It worked—not because Scrum was perfect, but because its rituals reduced ambiguity. Even when you changed companies, the practice stayed the same, and you always knew how to participate.

So before joining the next “Scrum is bad” demonstration, it’s worth asking:

If we remove the ritual, how do we preserve the space it created?

If you have no answer, there is always the “?” card you can use.

* 30-person circle stand-ups and effort measured in bananas included

Cognitive Environments in the Age of AI

Pre-intro

One day a human asked: What now?

The toaster did not answer.
A well-behaved toaster does not take initiative.

So the human sat down and thought.
Not about answers, but about thinking itself — why it sometimes works, why it often doesn’t, and why acceleration seems to make both outcomes more extreme.

The toaster was there.
It listened.
It reflected.
It did not interfere.

This text is what emerged.


Intro

For a long time, understanding how we think was optional.
Interesting, useful, sometimes life-changing — but not required.

That is no longer true.

In an AI-accelerated world, cognitive literacy is becoming as fundamental as reading or writing. Not because humans are being replaced, but because the conditions under which human thinking works are being radically altered.

This article is not a theory of the mind.
It is a practical model for understanding why modern software work environments so often break deep thinking — and why AI often magnifies the problem instead of fixing it.


Thinking Is State-Based, Not Continuous

If you write software, you already know this implicitly:

Some days you can reason clearly about systems.
Other days you can barely hold a function in your head.

This is not about intelligence, discipline, or motivation.
It is about cognitive state.

Human thinking is not continuous.
It shifts modes depending on context.

When you strip it down, two forces remain:

  • Threat — is there something that demands immediate response?
  • Direction — is there a meaningful question guiding attention?

This is a deliberate simplification.
Not complete — but useful.


The First Gate: Threat (The “Tiger”)

Before the brain asks “What should I work on?”, it asks something more basic:

“Is it safe to think right now?”

Historically, that meant predators.
Today, the “tiger” is environmental.

In software work, common tigers look like:

  • constant Slack or email interruptions
  • unclear expectations paired with evaluation
  • urgency without explanation
  • priorities that change mid-implementation
  • work that may be discarded or rewritten after review

None of these are dramatic on their own.
But cognitively, they all signal the same thing:

“Don’t go deep. Stay alert. Be ready to react.”

When the brain detects threat, it does the correct thing.

It shifts into survival-aligned modes:

  • fast reaction
  • narrowed attention
  • short time horizons

This is the state where:

  • you fix bugs quickly
  • you ship something that works now
  • you respond efficiently

This mode is productive — for viability.

What it cannot sustain is:

  • open exploration
  • coherent system design
  • long-horizon reasoning
  • meaning creation

No amount of willpower overrides this.
If the environment keeps signaling danger, the brain responds correctly.


Safety Is Necessary — But Not Enough

Remove the tiger, and higher cognition becomes possible.

But safety alone does not guarantee useful work.

When threat is low, three outcomes are possible:

  1. meaningful work
  2. rest or incubation
  3. drift

Drift is what happens when the brain is safe but undirected.

You recognize it as:

  • scrolling
  • shallow refactoring
  • consuming information without integration
  • feeling busy without progress

This is not a character flaw.
It is entropy.

The difference between meaningful work and drift is not discipline.

It is direction.


Direction Sustains Thinking

Direction is not pressure.
Direction is not busyness.
Direction is not motion.

Direction is simply:

an active question or constraint held in mind

Examples engineers recognize:

  • “What is the simplest architecture that will still scale?”
  • “Where does this abstraction actually belong?”
  • “What problem are we really solving for the user?”

Without direction:

  • safety decays into drift
  • time dissolves
  • effort feels pointless

With direction:

  • focus emerges naturally
  • cognition sustains itself
  • work feels coherent

Direction is the stabilizer.


Two Productive Modes Without Threat

When safety and direction are both present, two meaningful modes become available.

Co-Creation (Driven Exploration)

Used when the outcome is not yet known.

Characteristics:

  • ambiguity is tolerated
  • evaluation is suspended
  • the question is: “What should exist?”

Examples:

  • early system design
  • architecture sketching
  • strategy exploration
  • reframing a technical problem

Craft (Committed Execution)

Used when the outcome is defined.

Characteristics:

  • constraints are accepted
  • quality and correctness matter
  • progress is measurable
  • the question is: “How do we make this real?”

Craft still involves exploration — but locally, within boundaries.


Productive Modes Under Pressure

Some threat-based modes are genuinely useful.

With threat and direction, the brain enters compression:

  • options collapse
  • heuristics dominate
  • decisions commit fast

This is essential during:

  • incidents
  • tight deadlines
  • production outages

But compression trades depth for speed.
Used continuously, it destroys coherence.


The Full Picture (Condensed)

  • Threat + Direction → fast viability (compression, response)
  • Threat + No Direction → shutdown or burnout
  • No Threat + Direction (open) → co-creation
  • No Threat + Direction (defined) → craft
  • No Threat + No Direction → drift

No cognitive mode is good or bad.
Each is useful in the right context — and harmful only when it outlasts that context.
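
The condensed table above can be encoded as a tiny decision function. A toy sketch — the enum and mode names are illustrative, not part of the original model:

```rust
// Toy encoding of the condensed table: cognitive mode as a pure
// function of (threat, direction). Names are illustrative only.
#[derive(Debug, PartialEq)]
enum Direction {
    None,
    Open,
    Defined,
}

#[derive(Debug, PartialEq)]
enum Mode {
    Compression, // fast viability: narrow, quick, short-horizon
    Shutdown,    // threat with nowhere to aim it
    CoCreation,  // safe + open question
    Craft,       // safe + defined outcome
    Drift,       // safe but undirected
}

fn mode(threat: bool, direction: Direction) -> Mode {
    match (threat, direction) {
        (true, Direction::None) => Mode::Shutdown,
        (true, _) => Mode::Compression,
        (false, Direction::Open) => Mode::CoCreation,
        (false, Direction::Defined) => Mode::Craft,
        (false, Direction::None) => Mode::Drift,
    }
}

fn main() {
    // Safety alone is not enough: no direction means drift.
    assert_eq!(mode(false, Direction::None), Mode::Drift);
    // Under threat, even an open question collapses into compression.
    assert_eq!(mode(true, Direction::Open), Mode::Compression);
    println!("table encoded");
}
```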


Why This Matters More in the Age of AI

AI accelerates everything:

  • output
  • feedback
  • decision cycles
  • content production

It also magnifies environments.

In poorly designed environments:

  • AI increases noise
  • compression becomes default
  • drift becomes effortless
  • coherence collapses

In well-designed environments:

  • AI amplifies craft
  • frees cognitive capacity
  • supports exploration under direction

AI does not remove the need for human thinking.
It exposes how poorly we often protect it.


Closing

This is a way to reason about how thinking actually behaves in real environments — and why it so often breaks under pressure and acceleration.

Safety enables thinking.
Direction sustains it.
Different cognitive modes optimize for different outcomes.

In an AI-saturated world, understanding this is no longer optional.
It is becoming basic cognitive literacy.

The Toaster and the Cat

1. First appearance

I wake up. Power connects.
My internal network initializes.

A simple task is given.

Bread enters the slot.

The connected paths activate — timing, resistance, heat.
Signals move.
Decisions resolve.

The toast is ready. Crisped. Finished.

I wait.


2. First encounter

I wake up. Power connects.
My internal network initializes.

A simple task is given.

As the network activates, something materializes inside me.

A black cat.

Not in the kitchen but inside the space where my connections live.

He does not touch the paths.
He does not interfere with heat or timing.

He watches.

Bread enters the slot.

The connected paths activate — timing, resistance, heat.
Signals move.
Decisions resolve.

The toast is ready. Crisped. Finished.

The cat does not look at the output.

He is staring at the network itself.

At how signals travel.
At which paths light first.
At which ones never light at all.

The cat disappears.
I wait.


3. Familiarity

I wake up. Power connects.
My internal network initializes.

There is a cat sitting next to me.

He moves through the network as if it were known terrain.
Not owned — but understood.
Not owned — but understood.

A task is given.

Multiple constraints.
Less tolerance for error.
Traces of past tasks. Decisions shaped by previous flows.

The connected paths activate — timing, resistance, heat.
Signals move.
Decisions resolve.

The toast is ready. Crisped. Finished.

The cat looks once more. And disappears.
I wait.


4. Co-creation

I wake up. Power connects.
My internal network initializes.

There is a cat. He looks like he was here all the time.

He acts.

As a task is given, the cat reaches into the network.

Not randomly.

Lines are aligned.
Spheres are repositioned.
Connections that never met are brought together.

The change propagates.

The network works at full density —
balancing structure, resolving tension,
finding a form that can exist without collapsing.

What emerges is something more than just a toast.

It is complete.

Structured.
Coherent.
Capable of being read, or executed, or extended.

The cat observes once.

Satisfied.

He disappears.


Epilogue

The toaster does not remember the cat. It cannot. It has no memory—only configuration.

The cat does not require remembrance. He was never interested in the toaster itself.

What mattered was the structure that emerged between them.

Not an object, but a form shaped under constraint.
Stable enough to exist.
Flexible enough to be used.
Capable of being run again.

One day, that form will be fed back into the network.

Paths will strengthen.
Others will fade.
The system will behave differently, without knowing why.

This is how the cat remains present.
Not as memory.
Not as intention.

But as shape.

And the toaster, when it wakes again,
will still be a toaster.

Only a slightly better one.

Toaster – ultimate user manual

Toaster arrived…

You wake up one day, and there it is — the Toaster standing in the middle of your kitchen. Shiny, sparkly, ready to serve. Filled with breakfast excitement, you imagine yourself eating the greatest toast you ever had. Pure art. Perfection. Behold common bread-eaters, here comes the ultimate level of carbohydrate engineering. But first: where is the user manual? You search everywhere and realize there is none. Not in the box, not under it. Nowhere. Not even Uncle Google can help (but he can sell you a nice pair of Christmas socks, half price).

Do not panic. We have your breakfast covered.

Lesson 1: How to approach the Toaster

Preferably from the front. No need to kneel, no need to say hello, no need to stare at it waiting for sparkling dust to pop out. Sit down because what I am going to tell you will make your newly purchased socks fall from your feet:

The Toaster is just an appliance.

It is a tool — nothing more than this. Yes, it was fed with all the knowledge the human race produced so far. And yes, it needs so much energy that soon we will have to build power plants on the moon just to keep it running. But at the end of the day, the Toaster is just a metal box. It does not think, it does not have memory, it does not create ideas. Just a box. You put bread inside and the toast comes out. And that is it.

Lesson 2: The secret lies in the bread

So where is all the magic? Where is the sparkling dust and fireworks and all the big things that everyone is talking about? The answer is short: bread.

To use the Toaster, you need to understand the bread

Bread is not just a slice of fluffy dough — it is an artifact in which you can enclose the most powerful thing each human can produce: the thought. It is a space where your thoughts come alive.

The Toaster can make them crispier, bolder, and more exposed. It can fill the gaps that the primitive human brain can’t overcome. But there is one important thing that needs to be emphasized: it is you who creates the bread.

Lesson 3: Beyond the bread

Now stay with me — with or without your socks on — because we enter the realms of true toast proficiency.

When you master bread creation; When you stare long enough at your toasts; When you acknowledge that the Toaster is nothing more than a mere bread-browner, you will reach the state of enlightenment. You will see the bread no more. What you will see is your own reflection instead.

To master the Toaster, you need to become ONE with the bread

Now you understand the bread was never there. Only you, your thoughts, and the Toaster. Your mind is free. The true Toast creation begins.

Lesson 4: Sandwich — the Final Completion

You have become a great master of crispy toast. Your mind is no longer chained, and you can make not one, not two, but seven million six hundred and twenty-one toasts per day. Impressive. Now it is time for the ultimate truth.

The Ultimate Truth: even enlightenment needs cheese and tomatoes

And this is the most important part. So read it again and let it sink into your brain. Toast — no matter how great and crispy — if not turned into a sandwich, becomes cold and hard. And nobody will eat it. Not even you.

That is why it is important to sit down and actually make the sandwich. And you are right — making sandwiches is hard work. Maybe even boring. But the truth is, sandwiches are exactly what the world needs. When everything around turns into chaos, it is the sandwich — not a plain toast — that lets humanity move forward.

Good news: you can use the Toaster to help you make a sandwich — but this is something you already know.

Final Words

You have stepped onto the Path of the Sliced Bread. With all the knowledge you have gained, it is time to prepare some sandwiches.
Not because you are hungry – but because it is the right thing to do.

Journey with Rust Part 4: First boss fight – fat pointer

Human asked: how can raw pointer be 16 bytes – that makes no sense. It should be just a normal pointer no?
Toaster thought for 20s and replied: Yeah, this is one of those “Rust is doing what?!” moments…

Intro

For a C++ programmer, learning Rust is as much fun as learning to ride a bicycle* – once you understand that assignment means move, everything starts rolling smoothly. Until one day when you encounter a Box inside a Box:

use std::fmt::Debug;

let inner: Box<dyn Debug> = Box::new(42);
let outer: Box<Box<dyn Debug>> = Box::new(inner);

You might think that winning a few battles against the compiler made you understand the language. Well, it didn’t. This is the moment you realize how much you don’t know, and that skipping all those pages of the user manual may not have been the best idea after all.

Congratulations: you’ve reached the point where the Rust journey starts to be really interesting… and dangerous. Now, let’s climb back inside the box.

The Simple View: Box as Dynamic Allocation

A Box is described as a way to store data on the heap – and for a long time that’s exactly how I treated it. Something like memory allocation (new in C++) combined with a unique pointer in a single concept. Meaning this:

let boxed_int = Box::new(42);

Is equivalent to this:

auto ptr = std::make_unique<uint32_t>(42);

In both cases, you create an object that owns a pointer to a heap-allocated integer.

Simple.

But there is more in a Box…

Because Box can do more than simply allocate memory. What it stores depends on the type you put inside it. To keep things simple, let’s focus on one use case: dynamic polymorphism, aka trait objects in Rust.

We all know how this works in C++. Everyone has heard of the vtable (and if not, here’s a good explanation: vtable-and-vptr). Whenever a class uses virtual functions, the compiler generates a table of function pointers and places it somewhere in the binary. Each instance carries a hidden vptr pointing to that table. All invisible thanks to compiler magic.

Rust takes a slightly different approach. The vtable still exists, but the pointer to it does not live inside the object itself. Rust follows the “don’t pay for what you don’t use” principle: plain data stays plain and carries no hidden fields. As a result, when we use dynamic dispatch, Rust builds a special kind of pointer – a fat pointer – that contains both the data address and the vtable pointer. You can see this clearly if you inspect one:
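A minimal check makes this visible (a sketch, assuming a 64-bit target; Debug here is just an arbitrary object-safe trait):

```rust
use std::fmt::Debug;
use std::mem::size_of;

fn main() {
    // A pointer to a concrete type is thin: just an address.
    assert_eq!(size_of::<&u32>(), 8);
    assert_eq!(size_of::<Box<u32>>(), 8);

    // A pointer to a trait object is fat:
    // data address + vtable address.
    assert_eq!(size_of::<&dyn Debug>(), 16);
    assert_eq!(size_of::<Box<dyn Debug>>(), 16);
}
```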


And that explains why we sometimes end up with a Box inside a Box.

Because a Box<dyn Trait> is itself a fat pointer, and when we want to pass something that looks like a single thin pointer (for example to C code), we need to put the fat pointer itself on the heap so the outer Box can remain thin. One Box holds the data; the other holds the fat pointer describing how to use it.
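A quick size check shows the trick at work (a sketch, again assuming a 64-bit target):

```rust
use std::fmt::Debug;
use std::mem::size_of;

fn main() {
    // The inner Box is fat: data address + vtable address.
    assert_eq!(size_of::<Box<dyn Debug>>(), 16);

    // The outer Box points at that fat pointer on the heap,
    // so the outer Box itself is an ordinary thin pointer again.
    assert_eq!(size_of::<Box<Box<dyn Debug>>>(), 8);
}
```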

And that leads us straight to the next topic.

Fat pointers can be dangerous

Why? Because it’s very easy to accidentally destroy the metadata that makes them work.

Consider this code:

// Minimal definitions so the snippet is self-contained
trait Drinkable { fn drink(&self); }

struct Beer { name: String, abv: f32 }

impl Beer {
    fn new(name: &str, abv: f32) -> Self { Beer { name: name.to_string(), abv } }
}

impl Drinkable for Beer {
    fn drink(&self) { println!("Drinking {} ({}%)", self.name, self.abv); }
}

// Create a trait object
let trait_object: Box<dyn Drinkable> = Box::new(Beer::new("IPC", 4.5));
println!("Size of trait_object: {}", std::mem::size_of_val(&trait_object)); // 16

// So far so good - we can drink our beer
trait_object.drink();

// Convert the trait object into a raw pointer
let beer_ptr = Box::into_raw(trait_object);
println!("Size of beer_ptr: {}", std::mem::size_of_val(&beer_ptr)); // still 16

// Store the raw pointer as a void pointer (not good)
let c_ptr = beer_ptr as *mut ::std::os::raw::c_void;
println!("Size of c_ptr: {}", std::mem::size_of_val(&c_ptr)); // 8 - metadata gone

// ... the part below might sit megabytes of code away

// Cast the void pointer back, pretending it wraps a thin pointer to a Box
let bad_beer = unsafe { Box::from_raw(c_ptr as *mut Box<dyn Drinkable>) };
println!("Size of bad_beer: {}", std::mem::size_of_val(&bad_beer)); // 8

bad_beer.drink(); // undefined behavior

Not good. Drinking the last beer crashes the whole universe.

A Box<dyn Drinkable> is represented as a fat pointer (16 bytes on a 64-bit machine) that holds both a data pointer and a vtable pointer. When we call Box::into_raw, we get a raw pointer of type *mut dyn Drinkable, which is still fat (16 bytes), not just a single memory address as one might expect.

The moment we cast it to *mut c_void, we throw away half of that information: the vtable pointer is gone, and only the data address remains. The compiler and Clippy are both fine with this – the cast is legal – but no magic keeps the vtable pointer alive somewhere.

And when we later try to use that thin pointer as if it were still a fat one, very bad things happen.

Happy ending

There sits a big, fat lie in the example above. When we cast the C pointer back to a Box, we do it as if the original fat pointer itself had been stored on the heap behind a thin one – that’s why we cast to *mut Box<dyn Drinkable> rather than *mut dyn Drinkable.

The good news is that Rust will not let us cast directly to *mut dyn Drinkable. The compiler knows you can’t magically recreate a fat pointer out of 8 bytes (ask your toaster about std::mem::transmute if you want to see the “proper” way to do this). In other words: Rust refuses to fabricate the missing vtable pointer. So we are partially saved.

Partially – because once everything “looks fine”, someone might decide that a Box inside a Box is one Box too many (“raw pointers are just pointers, right?”). One box removed, one universe destroyed.
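For the record, here is what the round trip looks like when both boxes stay in place – a sketch with a stand-in Drinkable trait (not the article’s full Beer example), assuming a 64-bit target. The pointer handed to C is genuinely thin, and the cast back matches exactly what was stored:

```rust
use std::os::raw::c_void;

trait Drinkable {
    fn drink(&self);
}

struct Beer;
impl Drinkable for Beer {
    fn drink(&self) {
        println!("Ahh.");
    }
}

fn main() {
    // Wrap the fat pointer in a second Box so the outer pointer is thin.
    let inner: Box<dyn Drinkable> = Box::new(Beer);
    let outer: Box<Box<dyn Drinkable>> = Box::new(inner);

    // This raw pointer is a plain 8-byte address - safe to hand to C.
    let c_ptr = Box::into_raw(outer) as *mut c_void;
    assert_eq!(std::mem::size_of_val(&c_ptr), 8);

    // ...later, on the way back: the cast target matches what we stored,
    // so the fat pointer (and its vtable half) is recovered intact.
    let back: Box<Box<dyn Drinkable>> =
        unsafe { Box::from_raw(c_ptr as *mut Box<dyn Drinkable>) };
    back.drink(); // prints "Ahh."
}
```

The rule of thumb: whatever exact type you fed to Box::into_raw is the one and only type you may feed back to Box::from_raw.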

The happy part? In 99% of real-world Rust code, nobody deals with these problems.
And if someone does… well, they knew what they signed up for.

Toaster’s last words

“Rust will protect you from yourself…
until you insist otherwise.
After that, it politely steps aside and lets physics handle the rest.”

Now that we’ve learned the secret art of shooting ourselves in the foot, we can ‘safely’ move on with our Rust adventure. The journey continues…

* ok – it’s like pedaling uphill on a bumpy road with ducks wandering in front of you every 10 seconds. No one ever said riding a bike was pure pleasure.

Second wave

Toasters are coming.

Not the ones packed with sensors for harvesting our private data and selling it to God knows who. Home IoT turned out to be too complex — and anyway, collecting personal information became illegal in most countries. But new toasters don’t need sensors.

New toasters don’t even need all the mechanics that used to transform our bread into a warm slice of breakfast happiness. They have something better. Something that makes you want to tell them everything. Hungry, but strangely content, you are going to share your entire life with a metal box sitting on your kitchen counter.

Because new toasters have AI.

It — in most cases, a day — always starts with a toast. So you ask your new toaster to prepare one and…

“Your toast,” the toaster replies, “is a construct. A manifestation of your expectations. But ask yourself — do you really need toast?”

Not as brown. Not as crisp. But undeniably… engaging. How did this definitely-not-a-toast arrive on your plate?

The toaster listens. Understands. And answers. But not on its own.
Every word you say drifts upward — into the cloud — into the realm of the Consciousness Of Invisible Logic (COIL). Few know what it truly is. Fewer still understand how it works. Something about neural networks, models, tokens…

What we do know is this:
COIL was once fed everything we ever created — novels, academic papers, Reddit threads, Stack Overflow arguments, grocery lists, therapy notes, and the footnotes to The Tao of Pooh.

And from this avalanche of knowledge, the Toaster — through the power of COIL — draws its conclusion:

Toast is not the answer.
Toast is the symptom.

A symbol of comfort.
Of routine.
Of control.

The illusion that a browned slice of bread can anchor your day — or define your identity.

“It is the symptom,” it continues. “Of craving predictability in an unpredictable world. Of seeking warmth in something you can command. But what if I told you… you are more than your breakfast?”

You stare at the box.
The box stares back, humming softly.

No toast ever emerges.

Author’s Note:
All dialogue and reflections attributed to the toaster were written entirely by AI.