Someone Always Decided

Every failure has a meeting somewhere upstream.

It might not look like much in hindsight. A calendar invite. A hallway conversation. A quick “yeah, that should be fine.” A Slack message with a thumbs-up. A spreadsheet column nobody questioned. A deployment that happened on a Friday because it always does.

But at some point, before the incident report, before the apology, before the finger-pointing, someone decided.

And later, always later, someone asks the same questions:

  • Who approved this?

  • Why did we think this was okay?

  • What were we assuming at the time?

  • Was this supposed to still be in effect?

Most of the time, there’s no good answer.
Not because people were careless.
Not because they didn’t care.

But because nobody wrote it down in a way that survived the moment.

That’s not a moral failure. It’s a human one.

Paperwork Is How Humans Survive Time

We like to pretend paperwork is bureaucracy. Something imposed from above by regulators, lawyers, or risk departments.

It isn’t. Paperwork is how humans externalize memory so it doesn’t collapse under pressure.

We’ve always done this:

  • Clay tablets for grain

  • Ledgers for debt

  • Contracts on napkins

  • Ship logs

  • Medical charts

  • Kitchen prep lists

  • Change logs nobody enjoys maintaining

None of these exist because people love rules. They exist because memory is unreliable, especially when stakes rise and time passes.

When things go well, paperwork feels pointless. When things go wrong, it’s the only thing left standing.

The Real Failure Mode Isn’t Bad Decisions

Here’s the uncomfortable truth:

Most disasters aren’t caused by obviously bad decisions. They’re caused by decisions that outlived the moment they were made.

Someone approved something under a set of assumptions that quietly stopped being true. The world changed. The system changed. The team changed. The risks changed. But the decision stayed in force: unexamined, unquestioned, slowly becoming invisible.

Temporary became default. Context evaporated. Ownership blurred. And then one day, someone asks why the system behaved the way it did, and nobody can reconstruct the answer.

AI Didn’t Create This Problem

AI just makes it impossible to ignore.

AI systems accelerate everything that already made decision-making fragile:

  • Scale

  • Speed

  • Diffusion of responsibility

  • Turnover of people

  • Opacity of reasoning

  • Drift over time

With AI, decisions don’t just affect one person or one moment. They propagate. They compound. They get embedded into workflows, interfaces, and expectations.

And crucially: AI decisions are easy to reuse long after their original context is gone.

That’s not a safety problem. It’s a memory problem.

Someone Had to Say Yes

Behind every AI system in use, every deployment, every expansion, every configuration change, there was a moment where someone said yes.

Yes, this is acceptable. Yes, this is within bounds. Yes, we can rely on this. Yes, let’s move forward.

That moment matters more than most of the surrounding process. Not because the decision was perfect, but because it was the moment responsibility crystallized.

If you can’t point to that moment later, clearly, confidently, without reinterpretation, then everything downstream becomes a matter of opinion.

Decisions Need Owners, Not Narratives

One of the quiet ways accountability dissolves is when ownership turns into prose.

“The team decided.”
“It was approved by leadership.”
“This was aligned with policy.”

Those phrases feel professional. They’re also useless when something breaks.

At some point, accountability has to collapse to a person — or a clearly defined authority — someone who accepted that the decision might age badly, and took responsibility anyway.

That’s not about blame. It’s about reality.

Decisions Are Made From Inputs, Not Vibes

No decision appears out of nowhere.

There were inputs:

  • Reports

  • Metrics

  • Test results

  • Vendor claims

  • Prior decisions

  • Constraints

  • Tradeoffs

Later, when those inputs are forgotten or lost, the decision starts to look arbitrary — or worse, malicious.

Being able to say “this is what we relied on at the time” changes the entire tone of accountability. It turns hindsight judgment into reconstruction instead of speculation.

Constraints Matter More Than Intentions

Intentions are easy to rewrite after the fact. Constraints aren’t.

What was explicitly not allowed? What boundaries were in force? What limits were assumed to hold?

When those aren’t recorded, systems quietly expand beyond what anyone meant them to do, and nobody notices until the edge cases hurt someone.

Time Is the Missing Dimension

Here’s the part almost nobody treats as first-class: Decisions are not permanent truths. They are bets made under uncertainty.

Milk expires. Certifications expire. Passwords expire. But decisions? We treat them like they’re eternal. They aren’t.

Most decisions should have a shelf life. A moment where someone has to look again and say:

“Is this still what we want to rely on?”

Not because something went wrong, but because time passed.
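The idea of a decision with a shelf life can be sketched in a few lines. This is a minimal illustration, not any real schema; the field names and the owner are invented for the example.

```python
from datetime import date

# A decision recorded as a bet with an explicit expiry.
# Field names here are illustrative, not a standard.
decision = {
    "summary": "Allow model v2 to auto-approve refunds under $50",
    "owner": "jane.doe",                  # hypothetical named owner
    "decided_on": date(2024, 3, 1),
    "valid_until": date(2024, 9, 1),      # the bet expires on this date
}

def needs_review(d, today=None):
    """True once the decision has outlived its stated shelf life."""
    today = today or date.today()
    return today >= d["valid_until"]

print(needs_review(decision, today=date(2025, 1, 1)))  # True: look again
```

Nothing here enforces anything. The point is only that "is this still valid?" becomes a mechanical question instead of an argument.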

What Happens When You Don’t Do This

When decisions aren’t recorded in a durable, time-bounded way, a few predictable things happen:

  • Responsibility diffuses

  • Context gets reconstructed after the fact

  • Reviews become arguments

  • Oversight becomes performative

  • Everyone relies on vibes and institutional memory

And when pressure arrives, the system fails not loudly, but ambiguously.

Ambiguity is the enemy of accountability.

The Boring Fix

The fix isn’t revolutionary. It’s almost insultingly simple.

Leave a receipt. Not a novel. Not a manifesto.

Just enough to answer, later:

  • What decision was made?

  • Who owned it?

  • What system was used?

  • What inputs mattered?

  • What constraints applied?

  • How long was this supposed to be valid?

  • What would force us to revisit it?

That’s it.
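The seven questions above fit in a single small record. The sketch below is one way to hold them; it is not the AI OSI Core v0.1 schema, and every name in it is illustrative.

```python
from dataclasses import dataclass
from datetime import date

# A hedged sketch of a "decision receipt" answering the questions above.
# Not a standard: field names and example values are invented.
@dataclass
class DecisionReceipt:
    decision: str                # what decision was made
    owner: str                   # who owned it
    system: str                  # what system was used
    inputs: list[str]            # what inputs mattered
    constraints: list[str]       # what was explicitly off-limits
    valid_until: date            # how long this was supposed to hold
    revisit_triggers: list[str]  # what would force us to look again

receipt = DecisionReceipt(
    decision="Deploy sentiment model v3 for ticket triage",
    owner="ops-lead (a named person, not 'the team')",
    system="triage-svc",
    inputs=["offline eval report, May 2024", "vendor latency claims"],
    constraints=["no auto-close of tickets", "English-only traffic"],
    valid_until=date(2024, 12, 1),
    revisit_triggers=["traffic mix changes", "model retrained"],
)
```

A record this size survives turnover. Whether it lives in a wiki, a ticket, or a repo matters less than that the expiry date and the owner are explicit.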

No ethics embedded. No enforcement. No guarantees of correctness.

Just a record that the decision happened, and a reminder that it wasn’t meant to last forever.

Citation / Further Reading

This essay accompanies the publication of AI OSI Core v0.1, a minimal, open standard for recording and expiring AI-related decisions.

You don’t need the standard to get the point. It exists for when writing things down stops being optional.
