"I'm about to give you all of my money" --Aretha Franklin
"I'll take you down the only road I've ever been down" --The Verve
Blind Spots: How Anger, Technology, and Convenience Hide What’s Broken
A reflection on generational loss, systemic decay, and how blindness spreads — from broken toasters to corporate empires — until even our anger can’t see.
My parents’ toaster was perfect — a Sunbeam bought when they
were married: heavy, graceful, precise. The bread rose and fell on its own, as
if the machine knew hunger’s rhythm. It lasted decades. My own toasters break
within a year, and the dishwasher sits there like a mockery of convenience.
Somewhere between their kitchen and mine, quality vanished — and we stopped
noticing.
That old toaster wasn’t just a kitchen tool; it was a kind
of truth, a small, dependable proof that the world could be trusted to do what
it promised. Losing that trust didn’t happen overnight. It disappeared the way
habits change — quietly, beneath notice — until the absence itself became
invisible.
Generational Blindness: The Vanishing Baseline
My parents’ toaster belonged to a world that assumed things should work and keep working. Reliability wasn’t a luxury; it was the baseline.
When my kids make toast, they can’t imagine such a thing existed. Their
toasters jam, burn, and break — they shrug, and I realize they can’t miss what
they never knew. The erosion of quality hides itself this way: not as a sharp
loss, but as a soft forgetting.
The blind spot begins when expectations lower. It’s not that we can’t see what’s gone; it’s that we have nothing to compare it to. This is cultural
amnesia. When durability dies quietly, so does trust — not only in machines,
but in systems, in each other. We become acclimated to failure disguised as
normal life.
Once you stop expecting things to last, you start accepting
their replacements. Convenience becomes the new virtue. The blindness that began with the fading memory of quality deepens in its economics — in the way we measure worth by speed, not substance. The toaster fades into the dishwasher: a promise of time saved that quietly costs time instead.
Economic Blindness: Time as the Invisible Tax
“Dishwasher” means one thing to me and another to someone
whose model actually works. The word promises equality, but the experience is
stratified. For the wealthy, it saves time; for the rest, it becomes a drying
rack that mocks effort. This blindness hides in language itself — how shared
words disguise unequal realities.
We call it convenience, but it’s a trick of perspective. The
machine that saves time for one household consumes it in another. Every
breakdown steals a little more labor, a little more dignity. Time is money, but
the math no longer works: the people with less time pay more for everything.
Capitalism thrives on these blind exchanges — everyone believing they have the
same tools, when some are just illusions of access.
What happens inside the home mirrors what happens in the
nation. The same blindness that calls a broken machine “good enough” echoes in
boardrooms and legislative halls. We learn from our appliances how to tolerate
dysfunction. The household becomes a microcosm of a government scratching its own rash — treating inconvenience instead of infection.
Systemic Blindness: The Rash and the Poison
The blindness expands upward and becomes institutionalized. Corporations sell dependency as innovation; government treats symptoms as progress. We chase temporary relief — subsidies, outrage cycles, election-year
promises — while the deeper infection spreads. It’s an immune system turned
against itself.
We are told the patient is fine, that the economy is
“resilient.” Meanwhile, the social body scratches itself raw — education defunded,
healthcare rationed, infrastructure crumbling. We confuse movement with
progress, scratching harder as the pain worsens. The system can’t cure itself
because the rash is profitable. Blindness becomes policy.
The more the system fails, the more it dreams of salvation
through technology. So we turn to new machines — smarter, faster, self-learning
— believing they’ll see what we can’t. But technology inherits its maker’s
eyesight. The blindness migrates from flesh to code. The same old patterns, now
rendered in algorithmic precision.
Technological Blindness: The Vision That Forgets to Look Down
Now the blindness digitizes. The new prophets of progress —
the AI CEOs — issue warnings about the dangers of their own creations while
accelerating them. It’s an exquisite irony: foresight without vision. They are
not liars so much as captives of momentum, trapped in the logic of perpetual
advance.
Each innovation trains on what already exists, which means
every bias, every omission, every blind spot is preserved — polished even — in
the next version. A feedback loop of partial sight. The machines learn to
mirror our blindness at scale, automating our inability to pause. The language
of “responsibility” becomes another marketing dialect. Everyone sees the fire;
no one drops the torch.
And here, at the edge of all that progress, I find myself
squinting. Watching leaders warn of dangers while rushing toward them, I feel
the mirror crack inward. My anger is part of the same blindness — the same
feedback loop. I rage at their blindness, not seeing how it reflects my own.
The machine learns from us; we learn from the machine. Both run hot on fuel we
mistake for clarity.
Emotional Blindness: Anger as Smoke
Anger feels like clarity at first — it burns away confusion.
But the heat doesn’t last; it leaves haze. I’ve felt it simmering lately,
focused on my broken dishwasher, my bills, the absurdity of systems that reward
inefficiency and call it progress. Anger lights the path for a second, then
blinds with glare.
We are trained now to feed on outrage — headlines, comment
threads, performative fury. It’s profitable emotion, attention alchemy. What
once stirred action now sedates it. The more we rage, the less we see. Anger
sells because it feels alive, but it can’t sustain sight; it just thickens the
air. And so, even in revolt, the blindness grows.
When the smoke clears, what remains is the faint outline of
that old toaster — a relic of honest function. Maybe that memory isn’t
nostalgia but calibration: a reminder that things can work, and that
seeing clearly begins with remembering what that looked like. The lens narrows
to a point, then opens again — not to the bright future we were promised, but
to the simple possibility of sight restored.
Carving the Future: Mount Rushmore, Crazy Horse, and the Shape of Our Infrastructures
Every society leaves stories behind. Some stories are
written in books, some in code, and some are carved into stone. When I think
about the infrastructures we build—political, technological, cultural—I keep
coming back to two monuments that face each other across time: Mount Rushmore
and Crazy Horse.
Mount Rushmore is the story carved with authority. It was
created fast, funded well, and presented to the nation as if it spoke for
everyone. Its power comes from visibility. Its permanence comes from the speed
and force of centralized decisions. Once it exists, it defines the landscape.
It tells the public what matters, what is heroic, and who gets to be
remembered.
Crazy Horse is the story carved from a different place. It
is slower, funded irregularly, and carried by people whose history wasn’t
carved into the mountain the first time around. It moves at the pace of memory,
pain, and community effort. It is driven by correction, not domination. Its
strength is its truth. Its weakness is the weight of time.
Both are infrastructures.
Both shape the collective field of vision.
Both tell us who we are.
The difference lies in who gets to carve, and how fast.
When I look at the unfolding AI landscape, this tension
feels familiar.
The large companies and governments building centralized AI
systems are our modern Mount Rushmores. They have the resources to carve
quickly. They can reshape the terrain before most people even realize the rock
has shifted. Their narrative becomes the default not because it is the most
human, but because it is the most visible.
Grassroots, open-source, community-driven AI projects are
the Crazy Horse side of the equation. They carry a humanistic logic and a
different kind of truth. They are built from lived experience rather than
institutional priorities. They move slowly—not because they lack clarity, but
because they lack the time and compute to chisel at the same pace. Their
challenge is that the infrastructure they’re trying to influence evolves in
months while their work can take decades.
This creates a tension that is hard to ignore.
When the speed of centralized power far outstrips the speed
of the communities it affects, the collective ends up living inside someone
else’s monument. And when grassroots efforts try to keep up with an
infrastructure growing faster than they can fund, govern, or even understand,
the gap becomes debilitating.
Mountains don’t move, but the stories carved into them do.
We’re watching it happen in real time.
The question isn’t only which monument is “better.” It’s who
gets to shape the world we walk through, and whether our infrastructure
reflects authority or understanding. Speed or memory. Precision or humanity.
We need both.
We don’t thrive without both.
The real danger is letting only one carve the future.
A Stakeholder Model for AI: Managing the Relationship, Not the Machine
When we talk about “stakeholders” in wildlife or land
management, the structure is simple. There is the species, the landscape, and
the people whose lives intersect with it. Everyone meets because the thing
being managed cannot speak for itself.
With AI, the old model doesn’t hold.
The table tilts.
The mirror turns.
AI is not a silent creature on the landscape. It absorbs our
patterns, reflects them back, and sometimes steers the very people who believe
they’re steering it. That changes the work. It changes the responsibility. Most
of all, it changes what the word “stakeholder” even means.
If we want AI to grow in a human direction, the stakeholder
conversation has to become an ecosystem rather than a boardroom.
Below is a simple structure for thinking about that
ecosystem.
1. The Human World
The people who carry the weight of real consequences
This tier is not about expertise.
It is about lived life.
These stakeholders include workers, rural communities,
parents, elders, small business owners, marginalized groups, and anyone who
feels the pressure of automated decisions instead of writing them.
Their role is straightforward:
They anchor AI to reality.
They reveal the blind spots machines inherit from us.
They keep the system connected to the human ground it will always need.
When this tier is missing, AI becomes unrooted.
Decisions drift.
People get flattened into data points.
2. The Collective Mind
The interpreters of patterns and meaning
This tier holds the sensemaking.
Ethicists, psychologists, sociologists, historians, artists, philosophers,
community leaders.
They watch the mirror.
They notice when reflection becomes distortion.
They translate between human experience and machine logic.
Their presence protects meaning from collapsing under
optimization.
They guard the symbolic and cultural roots that keep a system human.
When this tier is missing, we end up with a machine that is
technically correct and socially destructive.
3. The Technical Keepers
The stewards of architecture and constraints
These are the engineers, model developers, auditors, and
safety teams.
Their responsibility is not to rule the system.
Their responsibility is to maintain it honestly.
They protect structural integrity.
They reveal limitations.
They ensure transparency instead of mythology.
When this tier dominates, we get technocracy.
When it is excluded, we get fantasy.
The Tension Between These Three Tiers Is the Point
Each tier limits the others in a healthy way.
• The Human World asks:
“Does this match real life?”
• The Collective Mind asks:
“Does this reflect healthy patterns?”
• The Technical Keepers ask:
“Is this safe and structurally sound?”
That tension prevents collapse.
It keeps one group from deciding what “the future” should look like for
everyone else.
This model doesn’t seek hierarchy.
It seeks balance.
The Real Managed Entity Is the Relationship
The mistake is thinking we need to “manage AI.”
The deeper mistake is thinking AI needs to “manage us.”
Neither is true.
What actually needs stewardship is the relationship—
the living feedback loop between humans and the systems we create.
If that loop becomes distorted, AI will amplify the
distortion.
If that loop is healthy, AI will amplify that health.
The roots of the system are human.
The branches are interpretive.
The scaffolding is technical.
AI grows inside all three.
Why This Matters
If stakeholders don’t show up from every tier, the vacuum
doesn’t stay empty.
Someone fills it.
Often the loudest.
Often the most advantaged.
Often the group with the narrowest perspective.
Keeping the relationship human requires presence,
communication, and an understanding that we are not managing a machine.
We are managing the space between ourselves and what we’ve made.
That space is where responsibility lives.
That space is where humanity remains.