How NACD Leaders Are Validating the AI OSI Stack
You’ll Never Guess What Happened To Me
This week, a senior advisor who has led national cybersecurity strategy for the U.S. Securities and Exchange Commission (SEC) and now directs governance initiatives for the National Association of Corporate Directors (NACD) reached out to discuss collaboration.
And here I thought I should be nervous about the IAPP reaching out.
Anyway, this senior advisor operates in the world of fiduciary responsibility, systemic risk, and board governance. I don’t. I work in the world of civic design, open protocols, and epistemic architecture. And if those worlds can meet, perhaps we can redefine what “responsible AI” means. Not as public relations, but as policy infrastructure.
The Stack is ready for that bridge. The codebase is clean, the schema public, the verification pipeline running end-to-end, and I have a clean new dashboard in the works. The only remaining question is: who will help run it in the world?
The Moment It Became Real
So, that question, who will help run it, takes me back to the moment I realized the Stack had crossed a threshold from framework to infrastructure.
The turning point was not the completion of the LaTeX manuscript but the first time I ran a full lifecycle test: Intent → Justify → CounterSign → Commit → Update. When the validators passed, governance logic ceased to be theory. It became executable evidence. Something that could be linted, verified, and signed.
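That lifecycle test can be pictured as a simple linter over an ordered sequence of governance events. The sketch below is purely illustrative: the stage names come from the Stack, but the function and its return format are hypothetical, not the actual validator code in the repository.

```python
# The five canonical lifecycle stages, in the order the Stack requires.
EXPECTED = ["Intent", "Justify", "CounterSign", "Commit", "Update"]

def lint_lifecycle(events):
    """Return a list of problems; an empty list means the run passes."""
    problems = []
    for i, stage in enumerate(EXPECTED):
        if i >= len(events):
            problems.append(f"missing stage: {stage}")
        elif events[i] != stage:
            problems.append(f"stage {i}: expected {stage}, got {events[i]}")
    if len(events) > len(EXPECTED):
        problems.append(f"unexpected trailing events: {events[len(EXPECTED):]}")
    return problems
```

The point of framing it this way is that "passing governance" stops being a judgment call: a run either produces an empty problem list or it does not.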
In that instant, “accountability as infrastructure” transformed from a phrase into a literal engineering principle. It was no longer just an academic exercise. It was a working architecture for how we might govern machine intelligence at scale.
From Framework to Infrastructure
Most AI ethics frameworks stop at aspiration. They name principles such as fairness, transparency, and accountability, but rarely describe how those principles should be implemented. They tell us what ought to happen but not how to make it happen.
The AI OSI Stack challenges that limitation. It introduces an OSI-style model for AI governance, where each layer carries its own duties, evidence, and safeguards.
Layer 0: Civic Mandate – Defines who authorizes the system and under what social license.
Layer 1: Ethical Charter – Converts civic values into enforceable norms.
Layer 2: Data Stewardship – Protects rights, consent, and fiduciary data handling.
Layer 3: Model Development – Documents lineage, evaluation, and explainability.
Layer 4: Instruction & Control – Governs prompts, personas, and operational logic.
Layer 5: Reasoning Exchange – Keeps outputs interpretable and contestable.
Layer 6: Deployment & Integration – Manages release, monitoring, and rollback.
Layer 7: Governance Publication – Publishes internal evidence for public review.
Layer 8: Civic Participation – Brings feedback and renewal back into the loop.
Each layer functions as a schema-backed contract. Every “shall” clause connects to a real JSON artifact in the repository. Governance, once a matter of paperwork and aspiration, becomes a matter of protocol and execution.
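What "schema-backed contract" means in practice can be sketched in a few lines. The field names and clause ID below are hypothetical stand-ins; the real AEIP schemas live in the repository. The idea is only that a "shall" clause becomes a required-fields check against a JSON artifact.

```python
import json

# Hypothetical contract: the fields this clause obliges an artifact to carry.
CLAUSE_SCHEMA = {
    "required": ["clause_id", "layer", "obligation", "evidence_path"],
}

def check_artifact(raw_json):
    """Return the required fields the artifact is missing; empty means compliant."""
    artifact = json.loads(raw_json)
    return [f for f in CLAUSE_SCHEMA["required"] if f not in artifact]

# An illustrative Layer 2 (Data Stewardship) artifact.
record = json.dumps({
    "clause_id": "L2-3.1",
    "layer": 2,
    "obligation": "Consent records shall be retained with provenance.",
    "evidence_path": "artifacts/consent/example.json",
})
```

A clause that cannot be checked this way is aspiration; one that can is protocol.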
The AEIP: Turning Trust into Data
At the core of the Stack is the AI Epistemic Infrastructure Protocol (AEIP), a foundational layer that defines the data formats making accountability testable. AEIP structures artifacts such as Decision Rationale Records, Governance Directive Sets, and Integrity Ledger Entries.
Every governance action (every justification, countersignature, and update) is stored as a verifiable record. Together these form what I call a trust ledger: a living chain of reason. The phrase “chain of reason” is not metaphorical. It is an audit trail that allows us to inspect the reasoning paths of AI systems with the same rigor that network engineers inspect packets of data.
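A minimal sketch of such a ledger, assuming each record carries the SHA-256 digest of its predecessor (field names here are illustrative, not the actual AEIP record format):

```python
import hashlib
import json

def ledger_entry(prev_hash, action, rationale):
    """Create a record whose hash covers its content and its predecessor."""
    body = {"prev": prev_hash, "action": action, "rationale": rationale}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(entries):
    """Replay the chain; any tampered or reordered record breaks verification."""
    prev = "0" * 64  # genesis
    for e in entries:
        body = {k: e[k] for k in ("prev", "action", "rationale")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Because each record commits to the one before it, an auditor who holds the chain can detect any after-the-fact edit to a justification without trusting the operator.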
This is what makes AI legible to society: the ability not only to review an outcome but to reconstruct the logic that produced it.
Building a Living Constitution
The Stack’s canonical paper, now formalized as AI OSI Stack v5.0, reads more like a constitution than a manifesto. Each section includes a normative clause, a verification table, and a link to a corresponding AEIP schema. Appendices extend into lifecycle escalation, provenance, maturity ladders, and transparency protocols.
The result is a document that can be both ratified and verified. It bridges academia and infrastructure, policy and practice. Each build produces a manifest, an integrity notice, and a reproducible hash chain so that authenticity becomes measurable rather than performative. It is, in every sense, a constitution that compiles.
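The integrity notice mentioned above can be thought of as a manifest mapping each build artifact to a content hash, so a reader can confirm that the published document matches the repository. This is a hedged sketch: the file names and functions are hypothetical, not the Stack's actual build tooling.

```python
import hashlib

def build_manifest(files):
    """Map each artifact name to the SHA-256 of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify_manifest(files, manifest):
    """True only if every artifact's current hash matches the published one."""
    return build_manifest(files) == manifest
```

Authenticity then stops being a claim and becomes a comparison anyone can run.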
And when people from the fiduciary world, those who manage systemic risk and institutional trust, take an interest in it, that convergence affirms something deeper: that civic protocol and corporate governance might finally share the same vocabulary for accountability.
Why I Built It Alone
People often ask how one person could build something of this scale. Nobody has actually asked me that, by the way, but I assume it has happened to others, and recently.
But the answer lies in pace and necessity. Institutional reform moves at committee speed; AI moves at machine speed. I wanted to close that gap, to see if governance could evolve fast enough to remain relevant.
So I built it all: the architecture, schemas, paper, dashboards, contract suites, and audits. There was no funding, no backers, only conviction and the assistance of a few good models. Independence mattered. Every decision, every line of code, every clause in the Stack was written for clarity rather than politics.
That independence also meant building something legible to both worlds, the civic and the fiduciary, without belonging exclusively to either. The Stack sits between them, translating shared responsibility into executable form.
The Road Ahead
Now that the Stack is public, the next phase begins: transforming it from framework to service layer. The Protocol Phase will focus on turning AEIP validators into a public API, expanding the dashboard into a civic interface, and partnering with institutions to conduct real audits.
Adversarial integrity sweeps will continue to ensure the system remains honest. The long-term goal is to sustain the project without compromising its independence. The Stack will stay open source at its core but allow commercial applications where they fund further creation.
This next stage will likely depend on partnerships between civic designers and fiduciary stewards, the very intersection where the initial outreach began. One speaks in schemas and reproducibility; the other in risk and oversight. Between them lies the shared future of responsible intelligence.
A Note on Imperfection
The repository is not finished, and that incompleteness is part of its honesty. Some sections from v4 remain unmerged, examples are missing, and several LaTeX chapters still need rewriting.
The audit reports can be unflinching, even brutal. But that is how genuine standards emerge: through iteration, not illusion. And I still need to proofread more than I have left to write. I think, or I hope.
Nevertheless, every failed test points toward a truth that still needs to be earned.
The Point of It All
At its heart, the AI OSI Stack is a hypothesis written in code: that governance should be as rigorous as the systems it governs. We do not need more promises about responsible AI. We need protocols that make ethics measurable, laws testable, and accountability programmable.
That is what I am building, a civic backbone for the age of machine intelligence. And this time, it is not just words.
It is infrastructure. For real people.
Key Concepts & Definitions
AI OSI Stack: A layered architecture for AI governance modeled on the Open Systems Interconnection (OSI) framework, making accountability testable and verifiable.
AEIP (AI Epistemic Infrastructure Protocol): The foundational data protocol that structures governance evidence and ensures verifiable reasoning.
Chain of Reason: A verifiable audit trail that captures and preserves the rationale behind AI decisions.
Accountability as Infrastructure: The design principle that governance mechanisms should be embedded directly into technical systems rather than treated as external oversight.
Living Constitution: A self-verifying governance document that unites normative rules with executable code.