Earth is flat. A short story of a lost thought.

It all started with a LinkedIn post. Nothing new — this week’s mandatory opinion, recycled with different words. Typical social media noise. Someone disagreed. Strongly enough to reach for heavy artillery and call the author a “flat-earther.” Boom. And with the recoil, I got hit too.

The Earth is flat!

That rang a bell. I remembered an old, insightful, and funny conversation with AI about… something. The problem was, all I could recall was the conclusion: the Earth is flat.

Nothing to worry about. I had my notes. A small document where I saved AI output worth keeping. I found this:

“Turns out the Earth is flat after all.”

Helpful. Thank you, past me, for trusting future me’s memory so much. Present me now had to reconstruct an entire line of thought from a single sentence. Good luck with that. Spacetime? Pancakes? Nothing clicked.

Then it hit me: if AI was involved, the whole thought process would still be there. AI would remember. The search took longer than expected, but eventually, I found it.

It wasn’t about the Earth at all. It was about information gradients—and how social media flattens them. Original ideas create spikes that, over time, get spread, diluted, and leveled across platforms. Until everyone is repeating the same thing, convinced they’ve discovered something new—while collectively ensuring everything becomes flat.

Thanks to AI, I was able to rediscover a thought that would otherwise have been lost. A thought that taught me nothing new—yet somehow felt exactly right.

The Secret Art of Keeping the Archwhale Alive

The Beast

There is a whale no one sees, circling slowly beneath the surface of every software project.

A mighty beast that carries systems on its back.

Be aware of its strength. When it is weakened or forgotten, it can pull the entire project down into the black depths of the entropy sea. And it does this so slowly that by the time someone realizes what is happening, it is already too late. Planning turns to chaos, change becomes impossible, and there are no more doughnuts from the manager. People leave as the music fades into its final violins*. And the light goes out.

Flip the soundtrack

Things don’t need to end this way—if we simply give our archwhale what it craves most: attention.

And when I say “we,” I mean everyone involved in the project. Each of us adds a small piece to the story. Adding something means taking responsibility for it.

Now the most important part: to care about a whale is not just to think about it (even if your thoughts are warm, sophisticated, or reach far into the future).
To care about a whale is to take a knife and cut it into pieces**.

Chop chop chop?

Yes—but not so fast.

First, let’s clarify what this actually means.

As explained in this article, there are countless axes along which architecture can be sliced, depending on intent. Search long enough and you’ll find hundreds of possible artifacts: designs, diagrams, documents—plus frameworks and blog posts comparing architecture to whales, bridges, or chocolate cakes.

So our first problem isn’t a lack of options, but an excess of them.

We can’t just start creating projections at random. Too much documentation is as harmful as too little. Before we start running around with diagram-knives, we need to stop and ask a simple question:

What are we actually trying to achieve?

The spatial dimension

You carry the project vision inside your head. You navigate it effortlessly. You know where things are solid—and where shortcuts were taken just to keep things moving. You already plan new features, consider possible risks, and think about how to mitigate them.

What lives in your head is similar to what an author carries when writing a book: an entire universe where the real story unfolds. Just like you, the author can explore the many possible futures it contains.

Now imagine not one author, but a hundred, all writing the same book. Without synchronization, one kills the main character while another sends him to Scotland to find a brother who was never missing.

The universe must be shared.

That’s why we externalize it. Architecture artifacts—API contracts, dependency graphs, interface boundaries—are projections of the system that enable shared reasoning, coordination, and onboarding, keeping the universe stable while many minds shape it at once.

The time dimension

You carry the project vision inside your head.

Today.

Tomorrow your attention shifts. A month from now, you won’t remember why things are the way they are.

“It’s all in the code,” one might say. But that’s not true. Many decisions don’t affect how code is written, but how it is not written.

Why was language X chosen instead of Y?
Was the availability of developers on the market considered? Ecosystem maturity? Team experience?
And when a framework was selected, which trade-offs were accepted—and are they still valid?

What we want to record is not just why we chose A, but the full reasoning behind that choice.

In this sense, architecture artifacts are memory. We use them to keep the universe stable while time passes.

Not just records — thinking surfaces

Artifacts have one more important function: they act as thinking surfaces—places where ideas are tested before they harden into decisions.

You already know how this works. You don’t create class diagrams when classes already exist in code—you do it before, to see how dependencies might look. This lets you reason at a higher level of abstraction than the implementation.

The same applies to ADRs. Instead of writing an ADR after a choice is made, start earlier. Capture doubts, alternatives, and trade-offs. After execution, clean it up and keep it.
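
To make this concrete, here is one possible shape for such an early-stage ADR. Everything in it is invented for illustration (the number, the options, the load figure), and the headings are not a standard, just one arrangement that keeps doubts visible:

    ADR-042: Message queue for order events (status: draft)

    Context
      Orders must be processed asynchronously; assume a peak load of ~500 events/s.

    Options considered
      1. RabbitMQ: the team knows it, but the operations story in our cloud is unclear.
      2. Kafka: a better fit if we ever need replay, but a steeper learning curve.

    Doubts and open questions
      - Do we actually need replay, or is that speculative?
      - Who will operate the cluster?

    Decision
      Not yet taken.

    Consequences
      To be filled in after execution, when the draft is cleaned up and kept.

The draft is allowed to stay messy while it serves as a thinking surface; only once the decision is executed does it get trimmed into the record future readers will see.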

This suggests that artifacts should be created only when we actively work on a subject. In general, yes—but they should also be reviewed from time to time (for example, at each major release). Check whether they still carry information worth caring about. Outdated artifacts can be archived so they don’t introduce unnecessary noise.

Time for sushi

Now we are ready. We know what we want—and, more importantly, why. As with everything in the universe, balance matters. The number of artifacts produced must be just enough to keep the project synchronized across space and time. This way, it stays on the edge of exploration while remaining stable.

And remember: architecture survives only as long as people actively care for it.
Not admire it.
Not remember it fondly.

Care for it through small, deliberate acts: revisiting decisions, updating maps, removing what no longer matters, making the invisible visible again.

Ignore it, and it will not protest.
It will simply sink.

* Max Richter — “On the Nature of Daylight” fits perfectly
** Space archwhales love to be sliced — it keeps them alive.

Software Architecture and a Cosmic Whale

Has Anyone Seen My Architecture?

There are countless definitions of software architecture.
Some emphasize decisions, others structures, others “the important stuff,” or whatever is hardest to change. Read enough of them and architecture begins to feel like something that slips through every classification—a creature everyone describes differently, yet no one seems to have seen.

And yet, this creature clearly exists. No one doubts that.
We recognize it by its effects: slow delivery, bugs that refuse to die, changes that feel far riskier than they should, systems that push back against even the smallest improvement.

The Mysterious Creature

One might try to exercise the imagination—to picture something that lives partly in code and partly in our heads. A multidimensional entity, not bound to a single moment in time, but stretched across the full span of its existence. Shaped by past decisions and external forces, while simultaneously guiding—and constraining—what changes are possible next. With enough effort, one might even convince oneself of having seen it.

But that is not the point.

We are software developers. Our job is not to chase mystical creatures, but to solve problems. We have deadlines. Features. Things that must work. We have bugs that reliably appear at 3 a.m.

What actually matters are the long-term consequences of change:

  • Whether, given what we have today, we can meet business requirements tomorrow.
  • Where to look when things begin to break apart.
  • Whether deleting a piece of code is safe—or the first step toward disaster.

Chop It!

To reason about architecture, we do what physicists do with spacetime—a similarly ungraspable monstrosity. If you are still holding on to some animal-like mental picture of architecture, now is the time to let it go. Things are about to get drastic.

We are going to slice it.

The axis we choose depends on what we want to understand, and which trade-offs we want to bring into the light.

Boundary axis (Context diagram)
What is inside the system, what is outside, and who depends on whom.

Time axis (Architecture Decision Records)
How the system arrived at its current shape.
Which decisions were made under which constraints—and which alternatives were rejected.

Runtime behavior axis (Sequence diagram)
How work flows through the system while it is running.
Who calls whom, in what order, and where latency or failure can occur.

Infrastructure axis (Deployment diagram)
How the system maps onto physical or virtual resources.
What runs where, what can be deployed independently—and what cannot.

Change axis (Module or service diagram)
How the system tends to evolve over time.
What changes together, what should not, and where change is expensive.

There are many more possible slices.

But the important thing is this: none of these projections is the architecture.
They are views—showing relationships, revealing trade-offs, and giving your brain something it can actually navigate.
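
If it helps to see this in something executable, here is a toy Python sketch (every name in it is invented for illustration): one tiny model of a system, two projections derived from it, and neither projection is the model.

    # A made-up system model: components, whether they live inside our
    # boundary, and who they call.
    system = {
        "web":      {"internal": True,  "calls": ["api"]},
        "api":      {"internal": True,  "calls": ["db", "payments"]},
        "db":       {"internal": True,  "calls": []},
        "payments": {"internal": False, "calls": []},  # external provider
    }

    def boundary_view(model):
        # Boundary axis: what is inside the system, what is outside.
        return {
            "inside":  [name for name, c in model.items() if c["internal"]],
            "outside": [name for name, c in model.items() if not c["internal"]],
        }

    def dependency_view(model):
        # Change axis: who depends on whom, i.e. where change can propagate.
        return {name: c["calls"] for name, c in model.items()}

    print(boundary_view(system))    # {'inside': ['web', 'api', 'db'], 'outside': ['payments']}
    print(dependency_view(system))  # {'web': ['api'], 'api': ['db', 'payments'], ...}

Each view answers a different question about the same underlying thing, which is exactly why no single diagram can claim to be the architecture.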

The End Game

The goal of the architecture game is not to catch the mysterious whale.
Those who try usually end up with piles of documents that age faster than the code—and quickly become useless.

The goal is to deliver. To know which axes to use at any given moment.
To move comfortably across different projections, and to predict the consequences of change—whether we introduce it deliberately or it is forced upon us. To prepare for disasters and to minimize the impact radius when they arrive.

One who knows how to play the game can deliberately evolve the system.
One who does not will eventually be eaten by code-degradation crabs.

Scrum estimation

The thing that never worked — while it worked perfectly

Disclaimer: I’m not a certified Scrum Master, Practitioner, Coach, or whatever title comes next. I’m just a software engineer who’s been fortunate enough to work at multiple companies, each with its own “flavor” of Scrum*.

I’ve always had mixed feelings about Scrum. Some things worked, some didn’t, and some only worked part of the time. Lately, though, I see more and more criticism framing Scrum as something that actively blocks progress. Much like “Scrum everywhere” ten years ago—only in reverse.

That’s not necessarily bad. There is no progress without challenging old ideas. But before going fully Scrum-free, it’s worth asking: do we really understand what we’re giving up?

Think about the estimation process.

Estimates have a terrible reputation, and for good reason. They never really answered the questions management cared about:

  • When will this feature ship?
  • Can the team squeeze in more work?

In that sense, estimation failed.

And yet, at the same time, it did something incredibly valuable.

Planning poker slowed us down. In fast-paced planning sessions, it created a deliberate pause—a precious moment to check whether we actually understood what we were about to build. It was the time to say: I don’t know what we’re doing or I think we’re solving the wrong problem.

Everyone was heard, and most importantly, every voice carried the same weight.

I remember being a junior, afraid of being judged by other team members while trying to keep up with everything happening around me. That single “?” card was my weapon. It was a safe signal. A permission slip to ask questions without justification.

So the real value of estimation was never about predicting delivery dates or measuring task complexity. It was about creating a shared, familiar environment where people felt allowed to speak up. It worked—not because Scrum was perfect, but because its rituals reduced ambiguity. Even when you changed companies, the practice stayed the same, and you always knew how to participate.

So before joining the next “Scrum is bad” demonstration, it’s worth asking:

If we remove the ritual, how do we preserve the space it created?

If you have no answer, there is always the “?” card you can use.

* 30-person circle stand-ups and effort measured in bananas included

Cognitive Environments in the Age of AI

Pre-intro

One day a human asked: What now?

The toaster did not answer.
A well-behaved toaster does not take initiative.

So the human sat down and thought.
Not about answers, but about thinking itself — why it sometimes works, why it often doesn’t, and why acceleration seems to make both outcomes more extreme.

The toaster was there.
It listened.
It reflected.
It did not interfere.

This text is what emerged.


Intro

For a long time, understanding how we think was optional.
Interesting, useful, sometimes life-changing — but not required.

That is no longer true.

In an AI-accelerated world, cognitive literacy is becoming as fundamental as reading or writing. Not because humans are being replaced, but because the conditions under which human thinking works are being radically altered.

This article is not a theory of the mind.
It is a practical model for understanding why modern software work environments so often break deep thinking — and why AI often magnifies the problem instead of fixing it.


Thinking Is State-Based, Not Continuous

If you write software, you already know this implicitly:

Some days you can reason clearly about systems.
Other days you can barely hold a function in your head.

This is not about intelligence, discipline, or motivation.
It is about cognitive state.

Human thinking is not continuous.
It shifts modes depending on context.

When you strip it down, two forces remain:

  • Threat — is there something that demands immediate response?
  • Direction — is there a meaningful question guiding attention?

This is a deliberate simplification.
Not complete — but useful.


The First Gate: Threat (The “Tiger”)

Before the brain asks “What should I work on?”, it asks something more basic:

“Is it safe to think right now?”

Historically, that meant predators.
Today, the “tiger” is environmental.

In software work, common tigers look like:

  • constant Slack or email interruptions
  • unclear expectations paired with evaluation
  • urgency without explanation
  • priorities that change mid-implementation
  • work that may be discarded or rewritten after review

None of these are dramatic on their own.
But cognitively, they all signal the same thing:

“Don’t go deep. Stay alert. Be ready to react.”

When the brain detects threat, it does the correct thing.

It shifts into survival-aligned modes:

  • fast reaction
  • narrowed attention
  • short time horizons

This is the state where:

  • you fix bugs quickly
  • you ship something that works now
  • you respond efficiently

This mode is productive — for viability.

What it cannot sustain is:

  • open exploration
  • coherent system design
  • long-horizon reasoning
  • meaning creation

No amount of willpower overrides this.
If the environment keeps signaling danger, the brain responds correctly.


Safety Is Necessary — But Not Enough

Remove the tiger, and higher cognition becomes possible.

But safety alone does not guarantee useful work.

When threat is low, three outcomes are possible:

  1. meaningful work
  2. rest or incubation
  3. drift

Drift is what happens when the brain is safe but undirected.

You recognize it as:

  • scrolling
  • shallow refactoring
  • consuming information without integration
  • feeling busy without progress

This is not a character flaw.
It is entropy.

The difference between meaningful work and drift is not discipline.

It is direction.


Direction Sustains Thinking

Direction is not pressure.
Direction is not busyness.
Direction is not motion.

Direction is simply:

an active question or constraint held in mind

Examples engineers recognize:

  • “What is the simplest architecture that will still scale?”
  • “Where does this abstraction actually belong?”
  • “What problem are we really solving for the user?”

Without direction:

  • safety decays into drift
  • time dissolves
  • effort feels pointless

With direction:

  • focus emerges naturally
  • cognition sustains itself
  • work feels coherent

Direction is the stabilizer.


Two Productive Modes Without Threat

When safety and direction are both present, two meaningful modes become available.

Co-Creation (Driven Exploration)

Used when the outcome is not yet known.

Characteristics:

  • ambiguity is tolerated
  • evaluation is suspended
  • the question is: “What should exist?”

Examples:

  • early system design
  • architecture sketching
  • strategy exploration
  • reframing a technical problem

Craft (Committed Execution)

Used when the outcome is defined.

Characteristics:

  • constraints are accepted
  • quality and correctness matter
  • progress is measurable
  • the question is: “How do we make this real?”

Craft still involves exploration — but locally, within boundaries.


Productive Modes Under Pressure

Some threat-based modes are genuinely useful.

With threat and direction, the brain enters compression:

  • options collapse
  • heuristics dominate
  • decisions commit fast

This is essential during:

  • incidents
  • tight deadlines
  • production outages

But compression trades depth for speed.
Used continuously, it destroys coherence.


The Full Picture (Condensed)

  • Threat + Direction → fast viability (compression, response)
  • Threat + No Direction → shutdown or burnout
  • No Threat + Direction (open) → co-creation
  • No Threat + Direction (defined) → craft
  • No Threat + No Direction → drift
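
The same mapping, written as a small illustrative Python sketch (the mode names are the ones used above; the function itself is hypothetical):

    def cognitive_mode(threat: bool, direction: str) -> str:
        # direction is "none", "open" (an active question),
        # or "defined" (a committed outcome).
        if threat:
            return "compression" if direction != "none" else "shutdown or burnout"
        if direction == "open":
            return "co-creation"
        if direction == "defined":
            return "craft"
        return "drift"

    assert cognitive_mode(threat=True,  direction="defined") == "compression"
    assert cognitive_mode(threat=False, direction="none") == "drift"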

No cognitive mode is good or bad.
Each is useful in the right context — and harmful only when it outlasts that context.


Why This Matters More in the Age of AI

AI accelerates everything:

  • output
  • feedback
  • decision cycles
  • content production

It also magnifies environments.

In poorly designed environments:

  • AI increases noise
  • compression becomes default
  • drift becomes effortless
  • coherence collapses

In well-designed environments:

  • AI amplifies craft
  • AI frees cognitive capacity
  • AI supports exploration under direction

AI does not remove the need for human thinking.
It exposes how poorly we often protect it.


Closing

This is a way to reason about how thinking actually behaves in real environments — and why it so often breaks under pressure and acceleration.

Safety enables thinking.
Direction sustains it.
Different cognitive modes optimize for different outcomes.

In an AI-saturated world, understanding this is no longer optional.
It is becoming basic cognitive literacy.