Failure Mode 1 — Sheep in the Ocean

Ever seen a whale pretending to be a grass field? Or a sheep swimming in the ocean?
Of course not.
Some things just don’t fit.

But the software world is different. Here a four-eyed sheep can fly in space and no one will care. Until the moment it hits the ground.

“Oh my – this guy is talking about sheep and whales again…”

Relax. No whales this time. Instead, let me show you two architecture failure modes
and one solution they both quietly ignore.

When execution models impersonate each other, complexity leaks. The fix is a real boundary.

For our examples we will use Modbus, an ancient way of exchanging data between machines, and one that still refuses to be replaced. Each device exposes a set of registers, read and written in a fixed, periodic loop.

Simple. Brutal. Effective.

Scenario 1

We start clean. A Modbus system runs in a single deterministic loop:

read state -> process -> write state -> repeat
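The loop above can be sketched in a few lines. This is a minimal illustration, not any particular Modbus library: `read_registers`, `process`, and `write_registers` are hypothetical placeholders for the real device I/O and control logic.

```python
import time

CYCLE_S = 0.1  # fixed cycle time: poll at 10 Hz


def read_registers():
    # Placeholder: read the device's holding registers into a snapshot.
    return {"temp": 21, "setpoint": 22}


def process(state):
    # Placeholder: pure control logic over the register snapshot.
    state = dict(state)
    state["heater_on"] = state["temp"] < state["setpoint"]
    return state


def write_registers(state):
    # Placeholder: write computed outputs back to the device.
    pass


def control_loop(cycles):
    for _ in range(cycles):
        started = time.monotonic()
        write_registers(process(read_registers()))
        # Sleep out the remainder of the cycle to keep the rate fixed.
        time.sleep(max(0.0, CYCLE_S - (time.monotonic() - started)))
```

Note there is no state carried between iterations beyond the registers themselves: each cycle is read, compute, write, done.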


One day, a new requirement appears: the Modbus data must be sent elsewhere using a modern RPC protocol.

Without much thought we start adding the communication logic to the main control loop. Suddenly alien constructions start to appear: retry counters, timestamps, acknowledgement signals. Before we know it, we have built a full-fledged message broker inside our simple loop.

Complexity grows.

Scenario 2

Now the opposite.

We start with a clean, event-driven environment. Requests, responses, handlers, queues. Perfect.

We add Modbus handling. “Easy,” we think.
“We’ll poll registers and emit events on change.”

It works… until signals start changing faster than the event system can digest them.
Events pile up, updates get dropped or reordered, information is lost.

And the more we try to solve it, the more complex the system becomes.

What happened?

In both cases we made the same fundamental mistake: we tried to bend the problem we were solving so it would fit the architecture that was already in place. We ignored the quiet signal saying:
“This does not belong here.”

There’s a simple rule—very much in the spirit of model-driven design:

Software should model the domain and its execution semantics.

For each domain, we must choose abstractions that fit naturally—without distortion.

The solution: a boundary with translation

The solution isn’t a smarter loop or a better event system.
It’s a boundary.

Keep each concern in its native execution model—and translate only at the edge.

On one side, a deterministic polling loop:

  • Read registers
  • Process state
  • Write registers
  • Repeat at a fixed rate

On the other side, an event-driven system:

  • Requests
  • Handlers
  • Queues
  • Backpressure

The boundary translates stable state from the deterministic world into meaningful change for the event-driven world.

No retries in the loop.
No event queues pretending to be registers.
No execution model impersonating another.

Each side runs the way it was designed to run.
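One way to sketch such a boundary is a latest-value mailbox: the loop overwrites a snapshot every cycle, and the event side pulls changes at its own pace. The class and method names here are mine, a sketch of the idea rather than a production implementation.

```python
import threading


class RegisterBoundary:
    """Latest-value mailbox between a polling loop and an event system.

    The loop side overwrites the snapshot every cycle; the event side
    only ever sees the most recent state. Bursts of fast register
    changes coalesce instead of piling up as queued events.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._snapshot = {}
        self._last_seen = {}

    def publish(self, snapshot):
        # Called from the deterministic loop: cheap, never blocks on consumers.
        with self._lock:
            self._snapshot = dict(snapshot)

    def poll_changes(self):
        # Called from the event side at its own pace. Returns only the
        # registers whose value differs from the last emitted value.
        with self._lock:
            current = dict(self._snapshot)
        changes = {k: v for k, v in current.items() if self._last_seen.get(k) != v}
        self._last_seen.update(changes)
        return changes
```

The event side turns each `poll_changes()` result into events; the loop never retries and never queues. If the event system falls behind, intermediate values are simply skipped, which is exactly the register semantics of the deterministic side.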

Getting there isn’t a technical trick—it’s a change in how you think about the problem.

Not:

“What’s the fastest way to implement this feature?”

But:

“What is the domain—and how does it naturally execute?”

Follow that, and things fall into place.

Sheep stay on grass.
Whales stay in the ocean.

And systems quietly become what they’re supposed to be.

Software Architecture and a Cosmic Whale

Has Anyone Seen My Architecture?

There are countless definitions of software architecture.
Some emphasize decisions, others structures, others “the important stuff,” or whatever is hardest to change. Read enough of them and architecture begins to feel like something that slips through every classification—a creature everyone describes differently, yet no one seems to have seen.

And yet, this creature clearly exists. No one doubts that.
We recognize it by its effects: slow delivery, bugs that refuse to die, changes that feel far riskier than they should, systems that push back against even the smallest improvement.

The Mysterious Creature

One might try to exercise the imagination—to picture something that lives partly in code and partly in our heads. A multidimensional entity, not bound to a single moment in time, but stretched across the full span of its existence. Shaped by past decisions and external forces, while simultaneously guiding—and constraining—what changes are possible next. With enough effort, one might even convince oneself of having seen it.

But that is not the point.

We are software developers. Our job is not to chase mystical creatures, but to solve problems. We have deadlines. Features. Things that must work. We have bugs that reliably appear at 3 a.m.

What actually matters are the long-term consequences of change:

  • Whether, given what we have today, we can meet business requirements tomorrow.
  • Where to look when things begin to break apart.
  • Whether deleting a piece of code is safe—or the first step toward disaster.

Chop It!

To reason about architecture, we do what physicists do with spacetime—a similarly ungraspable monstrosity. If you are still holding on to some animal-like mental picture of architecture, now is the time to let it go. Things are about to get drastic.

We are going to slice it.

The axis we choose depends on what we want to understand, and which trade-offs we want to bring into the light.

Boundary axis (Context diagram)
What is inside the system, what is outside, and who depends on whom.

Time axis (Architecture Decision Records)
How the system arrived at its current shape.
Which decisions were made under which constraints—and which alternatives were rejected.

Runtime behavior axis (Sequence diagram)
How work flows through the system while it is running.
Who calls whom, in what order, and where latency or failure can occur.

Infrastructure axis (Deployment diagram)
How the system maps onto physical or virtual resources.
What runs where, what can be deployed independently—and what cannot.

Change axis (Module or service diagram)
How the system tends to evolve over time.
What changes together, what should not, and where change is expensive.

There are many more possible slices.

But the important thing is this: none of these projections is the architecture.
They are views—showing relationships, revealing trade-offs, and giving your brain something it can actually navigate.

The End Game

The goal of the architecture game is not to catch the mysterious whale.
Those who try usually end up with piles of documents that age faster than the code—and quickly become useless.

The goal is to deliver. To know which axes to use at any given moment.
To move comfortably across different projections, and to predict the consequences of change—whether we introduce it deliberately or it is forced upon us. To prepare for disasters and to minimize the impact radius when they arrive.

One who knows how to play the game can deliberately evolve the system.
One who does not will eventually be eaten by code-degradation crabs.

Cognitive Environments in the Age of AI

Pre-intro

One day a human asked: What now?

The toaster did not answer.
A well-behaved toaster does not take initiative.

So the human sat down and thought.
Not about answers, but about thinking itself — why it sometimes works, why it often doesn’t, and why acceleration seems to make both outcomes more extreme.

The toaster was there.
It listened.
It reflected.
It did not interfere.

This text is what emerged.


Intro

For a long time, understanding how we think was optional.
Interesting, useful, sometimes life-changing — but not required.

That is no longer true.

In an AI-accelerated world, cognitive literacy is becoming as fundamental as reading or writing. Not because humans are being replaced, but because the conditions under which human thinking works are being radically altered.

This article is not a theory of the mind.
It is a practical model for understanding why modern software work environments so often break deep thinking — and why AI often magnifies the problem instead of fixing it.


Thinking Is State-Based, Not Continuous

If you write software, you already know this implicitly:

Some days you can reason clearly about systems.
Other days you can barely hold a function in your head.

This is not about intelligence, discipline, or motivation.
It is about cognitive state.

Human thinking is not continuous.
It shifts modes depending on context.

When you strip it down, two forces remain:

  • Threat — is there something that demands immediate response?
  • Direction — is there a meaningful question guiding attention?

This is a deliberate simplification.
Not complete — but useful.


The First Gate: Threat (The “Tiger”)

Before the brain asks “What should I work on?”, it asks something more basic:

“Is it safe to think right now?”

Historically, that meant predators.
Today, the “tiger” is environmental.

In software work, common tigers look like:

  • constant Slack or email interruptions
  • unclear expectations paired with evaluation
  • urgency without explanation
  • priorities that change mid-implementation
  • work that may be discarded or rewritten after review

None of these are dramatic on their own.
But cognitively, they all signal the same thing:

“Don’t go deep. Stay alert. Be ready to react.”

When the brain detects threat, it does the correct thing.

It shifts into survival-aligned modes:

  • fast reaction
  • narrowed attention
  • short time horizons

This is the state where:

  • you fix bugs quickly
  • you ship something that works now
  • you respond efficiently

This mode is productive — for viability.

What it cannot sustain is:

  • open exploration
  • coherent system design
  • long-horizon reasoning
  • meaning creation

No amount of willpower overrides this.
If the environment keeps signaling danger, the brain responds correctly.


Safety Is Necessary — But Not Enough

Remove the tiger, and higher cognition becomes possible.

But safety alone does not guarantee useful work.

When threat is low, three outcomes are possible:

  1. meaningful work
  2. rest or incubation
  3. drift

Drift is what happens when the brain is safe but undirected.

You recognize it as:

  • scrolling
  • shallow refactoring
  • consuming information without integration
  • feeling busy without progress

This is not a character flaw.
It is entropy.

The difference between meaningful work and drift is not discipline.

It is direction.


Direction Sustains Thinking

Direction is not pressure.
Direction is not busyness.
Direction is not motion.

Direction is simply:

an active question or constraint held in mind

Examples engineers recognize:

  • “What is the simplest architecture that will still scale?”
  • “Where does this abstraction actually belong?”
  • “What problem are we really solving for the user?”

Without direction:

  • safety decays into drift
  • time dissolves
  • effort feels pointless

With direction:

  • focus emerges naturally
  • cognition sustains itself
  • work feels coherent

Direction is the stabilizer.


Two Productive Modes Without Threat

When safety and direction are both present, two meaningful modes become available.

Co-Creation (Driven Exploration)

Used when the outcome is not yet known.

Characteristics:

  • ambiguity is tolerated
  • evaluation is suspended
  • the question is: “What should exist?”

Examples:

  • early system design
  • architecture sketching
  • strategy exploration
  • reframing a technical problem

Craft (Committed Execution)

Used when the outcome is defined.

Characteristics:

  • constraints are accepted
  • quality and correctness matter
  • progress is measurable
  • the question is: “How do we make this real?”

Craft still involves exploration — but locally, within boundaries.


Productive Modes Under Pressure

Some threat-based modes are genuinely useful.

With threat and direction, the brain enters compression:

  • options collapse
  • heuristics dominate
  • decisions commit fast

This is essential during:

  • incidents
  • tight deadlines
  • production outages

But compression trades depth for speed.
Used continuously, it destroys coherence.


The Full Picture (Condensed)

  • Threat + Direction → fast viability (compression, response)
  • Threat + No Direction → shutdown or burnout
  • No Threat + Direction (open) → co-creation
  • No Threat + Direction (defined) → craft
  • No Threat + No Direction → drift
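The mapping above is essentially a small decision function. A sketch, with mode names taken from this article rather than any standard taxonomy:

```python
def cognitive_mode(threat: bool, direction: str) -> str:
    """Map (threat, direction) to a mode, per the table above.

    direction is one of: "none", "open", "defined".
    """
    if threat:
        # Under threat, any direction yields compression; none yields shutdown.
        return "compression" if direction != "none" else "shutdown"
    if direction == "open":
        return "co-creation"
    if direction == "defined":
        return "craft"
    return "drift"
```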

No cognitive mode is good or bad.
Each is useful in the right context — and harmful only when it outlasts that context.


Why This Matters More in the Age of AI

AI accelerates everything:

  • output
  • feedback
  • decision cycles
  • content production

It also magnifies environments.

In poorly designed environments:

  • AI increases noise
  • compression becomes default
  • drift becomes effortless
  • coherence collapses

In well-designed environments:

  • AI amplifies craft
  • frees cognitive capacity
  • supports exploration under direction

AI does not remove the need for human thinking.
It exposes how poorly we often protect it.


Closing

This is a way to reason about how thinking actually behaves in real environments — and why it so often breaks under pressure and acceleration.

Safety enables thinking.
Direction sustains it.
Different cognitive modes optimize for different outcomes.

In an AI-saturated world, understanding this is no longer optional.
It is becoming basic cognitive literacy.