Why “Just Trust Us” Isn’t Good Enough

Editor’s note: I haven’t been consistent here. I write intensely, then disappear. Not because the work stopped mattering, but because the world kept demanding attention faster than I could keep a clean record. This post is partly about AI governance. It’s also about finally deciding to treat this site, and the people who read it, with more care and continuity than I have in the past.

——

I didn’t build AI OSI because I’m optimistic about institutions.

I built it because I've spent enough time inside quietly failing systems to recognize the pattern before the collapse becomes obvious to everyone else. Long enough to know that things don't usually fall apart in a blaze of incompetence. They rot. Slowly. Politely. With everyone technically doing their job.

There’s a smell to it. Anyone who’s been around long enough knows it. Not chaos. Not corruption. Something more mundane and more dangerous. Drift. Authority drifting away from responsibility. Responsibility drifting away from evidence. Evidence drifting out of memory altogether. Meetings still happen. Memos still circulate. People still nod at the right moments. And somehow the center doesn’t hold.

Eventually, the whole operation runs on borrowed time and borrowed people. Usually it's a few exhausted individuals carrying risks they were never authorized to carry, telling themselves they'll clean it up later, once things slow down. They never do.

When systems like that finally crack, the response is depressingly predictable. Suddenly process is the villain. Friction is weakness. Constraint is indulgence. What’s needed, people say, is discipline. Control. Someone decisive enough to cut through the mess and take charge.

I’ve heard that speech before. More than once. It never ends well.

People don’t turn to strong rulers because they hate governance. They turn because governance stopped doing its job. When systems stop explaining themselves, trust erodes. When no one can tell who’s accountable, legitimacy evaporates. And when legitimacy evaporates, authority starts looking like something you can simply grab with both hands.

You can see this logic everywhere now — in politics, in companies, and increasingly in the way people talk about AI.

Move fast. Trust us. Trust the founders. Trust the model. Governance is slowing us down.

It’s a comforting story. It’s also bullshit.

What slows systems down isn’t governance. It’s fake governance. Paper policies. One-time reviews. Committees with no teeth. Compliance exercises designed to be passed, not lived with. Accountability theater that looks convincing enough until the lights come up.

We didn’t finish the work. And now people are pretending the work itself was the mistake.

There’s a smaller, more uncomfortable version of this story that lives closer to home.

I haven’t always treated this site — or the people who read it — with the care it deserves. I disappear. I resurface. I write intensely for a while, then let long stretches go by without explanation. Not because I don’t care, but because I care about too many things at once. The world keeps throwing new failures, new risks, new fires into view faster than I can process them, and I chase the work instead of tending the record.

Sometimes I leave the trail half-kept.

That’s on me.

In its own way, this site has been run a little like the governance I’m criticizing — episodic, reactive, driven by urgency rather than continuity. I write when something breaks, when something irritates me, when something feels dangerous enough to demand attention. Then I move on, assuming the thread will still be here when I come back.

Usually it is. Readers are more patient than they should have to be.

The irony isn’t lost on me. I’m arguing for durable authority, continuity, and systems that don’t depend on memory or mood — while occasionally treating my own work like a scratchpad instead of an archive.

That tension is part of why I finally made up my mind about this damn site.

If I’m going to argue that governance has to be engineered, revisited, and maintained under pressure, then my own work has to be handled the same way. Not perfectly. But deliberately. With some respect for the people who keep checking back to see whether I’m still thinking about the same things, or whether I’ve drifted off again.

This isn’t a promise to post on a schedule. It’s not a content strategy.

It’s a decision to treat this place as what it actually is: a record. A place where arguments accumulate instead of evaporate. Where I return to ideas instead of abandoning them mid-sentence. Where continuity matters more than momentum.

That failure — of continuity, of finished work — is especially dangerous in AI.

AI compresses time. It scales decisions. It hides judgment behind interfaces and abstractions that feel clean and neutral until they aren’t. When authority becomes illegible in systems like that, the damage isn’t abstract. It’s operational. It spreads. It lingers long after the people who made the call have moved on to something else.

“Just trust us” isn’t confidence in that environment. It’s negligence.

You don’t prevent abuse by centralizing power. You prevent abuse by making power inspectable. You don’t reduce risk by eliminating friction. You reduce risk by putting friction where it belongs. Before deployment, before scale, before harm becomes irreversible.

That’s why I built AI OSI.

Not because I want control. Because I’ve seen what happens when no one can explain who has it.

I didn’t want a hero. I wanted infrastructure.

Infrastructure doesn’t care how compelling your vision is or how urgent the moment feels. It doesn’t respond to charisma. It asks boring, unfashionable questions that don’t go away just because you don’t like them. Who authorized this? Based on what evidence? Under what constraints? And who is still accountable when the context changes and the easy answers are gone?

AI OSI doesn’t promise speed. It doesn’t promise dominance. It doesn’t promise that the right people will always be in charge.

It promises something quieter and far less exciting, which is usually where the truth lives. If you act, you show your authority. If you decide, you show your work. If you deploy, you remain accountable. And if circumstances change, legitimacy has to be earned again.

No vibes. No loyalty tests. No permanent mandates.

Just layered responsibility. The kind that can survive pressure without collapsing into chaos on one side or command-and-control on the other.

The people arguing for harder rule, tighter discipline, or bypassing democratic process aren’t my enemies. I don’t hate them. I don’t even think they’re wrong about the frustration. They’re warning lights. They show up when governance debt has piled up so high that shortcuts start to feel like courage.

They’re not diagnosing the disease. They’re reacting to untreated symptoms.

The real problem isn’t that institutions are weak. It’s that too many of them were left unfinished, and we kept leaning on them anyway, hoping they’d hold.

AI OSI exists because I don’t want the future of AI governance framed as a choice between incompetence and authoritarianism. That’s a false choice. History has been merciless to people who believed otherwise.

If what I’m building sounds slower, that’s intentional. I’ve seen what speed without accountability looks like. If it sounds boring, even better.

Boring systems are the ones that still work when things get ugly.

And right now, whether we admit it or not, things are getting ugly.

That’s why I care.

This site is where I think in public; the AI OSI work lives at aiosi.org.
