The Work That Found Me
How I Landed in AI Governance, What I Built, and Why It Matters
Over the past few months, something unexpected unfolded in my life. It was not a plan or a strategy. It was not ambition or positioning. It was not even something I believed could happen to someone like me. Instead, the work arrived first, and only later did I realize that I had stepped into a field I had never consciously aimed for.
I began as an outsider: no platform, no institutional backing, no formal standing in AI governance. Yet my writing started circulating among governance leaders, privacy professionals, board-level advisors, and semantic ontology architects. This shift raised a question for me. What happens when you follow a question so persistent that it pulls you into a domain before you fully understand its gravity?
The question I followed was deceptively simple: How do we make AI systems understandable, auditable, and aligned with human values before the speed of machines overtakes the speed of human judgment? I began with a blog. I ended up with an architecture. That unexpected transition continues to teach me why the work matters, not only for me but for the world we are speeding toward.
When IT Became HR for AI
Several years ago, during one of the more difficult stretches of my professional life, I coined a phrase that began as a joke. It stayed with me far longer than I expected: IT is HR for AI. I meant this in a very literal sense. Humans manage people. Systems will soon manage AI. If we fail to design the management layer with structure and integrity, we risk sliding into opaque automation and ungoverned power.
This phrase lived quietly in the background until 2025 forced it back into view. Something shifted this year. I can only describe it as a move into machine time. Capabilities advanced faster than comprehension. Oversight lagged behind deployment. Governance struggled to keep pace. The result was a kind of vertigo: acceleration without understanding, capability without coherence, and hype without architecture.
That sense of disorientation became the turning point. I could not look away.
Why Writing Became a Form of Inquiry
I did not begin writing to impress anyone. I certainly did not expect people to read what I wrote, let alone share it. Instead, I wrote because something felt fundamentally misaligned in our collective approach to AI. We build systems more quickly than we build the structures required to govern them. Ordinary people are left behind. Meaning erodes. Language warps under algorithmic pressure. Governance frameworks drift toward slogans rather than engineering discipline.
And more personally, I wrote because I did not want AI to become something that happens to people. I wanted it to remain something built with people in mind.
This intention shaped the essays that followed: The Shadow Filter, When Everything Sounds Like a Bot, AI Governance as a Living Practice, Why Community and Culture Will Define the Next Decade, Michael Burry and the Collapse of Market Reasoning, How I Could Help BlackRock, Vanguard, and State Street, and AI OSI Stack. Readers arrived slowly at first, then more quickly, then with surprising precision. The right people found the work.
The AI OSI Stack: The Architecture That Emerged
As I wrote, I began to notice patterns. My arguments were quietly assembling themselves into a layered architecture. I did not set out to build a model. I eventually realized that the work had built itself through me. The AI OSI Stack emerged as a way to govern AI systems with alignment to the EU AI Act, ISO 42001, the NIST AI RMF, fiduciary obligations, civic mandates, human values, meaning integrity, reasoning verification, data provenance, auditability, and technical realism.
Many people treat AI governance as a checklist. I began to ask whether it might instead be an architecture. That question pushed me to design the stack using a structure inspired by networking layers. Instead of routing packets, the layers handle constraints, obligations, reasoning, data, automation controls, cultural context, human oversight, transparency, and semantic alignment.
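To make the layered idea concrete, here is a minimal sketch of the stack as an ordered series of governance checks. The layer names come from the description above; the evaluation logic, function names, and the fail-closed default are my own illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch: the AI OSI Stack as an ordered list of governance
# layers. A request must clear every layer; a layer with no registered
# check fails by default, so nothing ships ungoverned.
from dataclasses import dataclass

LAYERS = [
    "constraints",
    "obligations",
    "reasoning",
    "data",
    "automation_controls",
    "cultural_context",
    "human_oversight",
    "transparency",
    "semantic_alignment",
]

@dataclass
class LayerResult:
    layer: str
    passed: bool
    note: str = ""

def evaluate_stack(checks: dict) -> list[LayerResult]:
    """Walk the stack in order; `checks` maps layer -> (passed, note).

    A missing layer is recorded as a failure rather than skipped,
    which is the fail-closed behavior assumed for this sketch.
    """
    results = []
    for layer in LAYERS:
        passed, note = checks.get(layer, (False, "no check registered"))
        results.append(LayerResult(layer, passed, note))
    return results

def is_compliant(results: list[LayerResult]) -> bool:
    return all(r.passed for r in results)
```

The point of the ordering is the same as in networking: each layer can assume the layers beneath it have already held, so a failure localizes to the layer that reported it.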
It became clear that this was more than a philosophical exercise. It was engineering that looked like philosophy or perhaps philosophy that behaved like engineering. To ground it, I created a working demo: a governance control tower that rendered the OSI Stack into real-time compliance scoring, cross-framework alignment, risk telemetry, audit-ready reasoning trails, machine-interpretable obligations, executive dashboards, and a future-proof evidence schema. It became a prototype of what accountable and transparent AI might look like.
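The scoring idea in the control tower can be sketched in a few lines. The framework names below come from the essay (EU AI Act, ISO 42001, NIST AI RMF); the weights, the weighted-average rule, and the audit-entry shape are assumptions made for illustration, not the demo's actual schema.

```python
# Illustrative sketch of cross-framework compliance scoring with an
# audit-ready trail. Weights are hypothetical placeholders.
from datetime import datetime, timezone

FRAMEWORK_WEIGHTS = {"eu_ai_act": 0.4, "iso_42001": 0.3, "nist_ai_rmf": 0.3}

def compliance_score(framework_scores: dict, audit_trail: list) -> float:
    """Weighted average of per-framework scores in [0.0, 1.0].

    Every call appends an entry to `audit_trail` recording the inputs
    and the resulting score, so the number can be replayed later.
    """
    score = sum(
        FRAMEWORK_WEIGHTS[name] * framework_scores.get(name, 0.0)
        for name in FRAMEWORK_WEIGHTS
    )
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": dict(framework_scores),
        "score": round(score, 3),
    })
    return round(score, 3)
```

Treating a missing framework score as zero, rather than omitting it, is the same fail-closed choice as in the layer sketch: an unmeasured obligation should drag the dashboard down, not disappear from it.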
The Response I Did Not Expect
The first major inflection point came when I connected with Dominique Shelton Leipzig, one of the most influential voices in AI governance. She saw value in the work. That validation became fuel. Then editors at the IAPP engaged with my drafts. Then a senior advisor to the National Association of Corporate Directors reached out. Soon after, a CEO contacted me. His work in semantic architecture, ontology engineering, and machine reasoning aligned precisely with the domains my own architecture had grown into.
None of this was planned. I did not run a campaign. I did not craft a strategy. I wrote, built, and published. And the right people found me.
The Deeper Reason I Care About This Work
My sense of urgency does not come from abstract futurism. It comes from lived experience. I know what it feels like when governance fails. I know how organizations suffer when structures for accountability do not exist. I know the cost of leadership that collapses into self-interest. I know how truth can become dangerous in environments that punish integrity.
I have lived through situations where doing the right thing led to retaliation rather than reform. That history shapes my commitment to building something better for AI.
I do not want governance to be ceremonial. I do not want oversight to become paperwork. I do not want trust to dissolve into a marketing term. I do not want language to lose the clarity that makes meaning possible. And I do not want people to feel powerless in the presence of machines.
I am not a futurist or a hype merchant. I am not trying to save the world. I am trying to build something real. The OSI Stack is my attempt to do that.
Imagining What Comes Next
My goal remains simple but ambitious. I want AI governance to become an engineering discipline. Not a collection of training modules or slogans. Not compliance theater. A discipline grounded in architecture, constraints, reasoning, transparency, and auditability.
This is why organizations like Interstellar Semantics resonate with me. They treat meaning as infrastructure. They treat ontology as engineering. They operate at the layer where clarity and truth still matter. This is the environment where I want my work to live. Not because I am seeking a job but because this is the first work I have done that feels meaningful beyond my own life.
Why I Will Keep Going
I continue this work without institutional protection, without a formal team, and without anyone instructing me to do it. I continue because everything in my career has pointed toward a single realization. AI is not a product. It is a civic force. It requires structure rather than chaos.
So I will keep building. I will keep writing. I will keep refining the OSI Stack. I will keep advocating for reasoning integrity and meaningful governance. I will keep designing structures that make accountability real.
I do not know exactly where this path leads. I only know that the work is worth doing. For the first time in a long time, I feel like I am doing the work I was meant to do.
Key Concepts and Definitions
AI OSI Stack: A layered architecture for AI governance inspired by the networking model. It includes constraints, obligations, reasoning, data, automation controls, cultural context, human oversight, transparency, and semantic alignment.
Machine Time: A period in which AI capabilities accelerate faster than human comprehension, judgment, or governance structures can keep up.
Meaning as Infrastructure: The idea that semantics and clarity form the structural backbone of any system designed to preserve truth, interpretability, and reasoning.
Governance Control Tower: A prototype interface that renders AI governance into real-time signals, audit trails, compliance scoring, and machine-interpretable obligations.