Welcome to my AI Lab Notebook
This is where I study AI not as a product, but as a system shaping human life.
Over time, three themes have defined my work:
1. AI Governance as Architecture: I build frameworks like the AI OSI Stack, persona architecture, and semantic version control because AI needs scaffolding, not slogans.
2. The Human Meaning Crisis in Machine Time: I explore how AI destabilizes identity, trust, and authenticity as machine speed outpaces human comprehension.
3. Power, Distribution, and Responsibility: I examine who benefits from AI, who is displaced, and how governance, economics, and control shape outcomes.
These pillars guide everything I write here. AI’s future won’t be determined by capability alone; it will be determined by the structures, meanings, and power dynamics we build around it.
Thanks for reading.
The Year Compute Broke Governance: Why Google’s Six-Month Doubling Cycle Signals the Collapse of Human-Time Oversight
The moment that may define the next decade of AI governance arrived quietly inside a Google all-hands meeting. A single slide, delivered without drama, stated that Google must now double its compute every six months and pursue a thousandfold increase within five years. This is more than an engineering target. It signals a shift into a form of acceleration that human institutions are not built to track.
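The two figures on that slide are consistent with each other: doubling every six months for five years compounds to roughly a thousandfold. A quick sketch of the arithmetic:

```python
# Five years contain ten six-month doubling periods.
periods = 5 * 2

# Compounded growth: compute doubles once per period.
growth = 2 ** periods

print(growth)  # 1024 — roughly the "thousandfold" target
```

The same compounding is why human-time oversight struggles: a review cycle that takes a year arrives two doublings late.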
Michael Burry Did Not Quit: He Stepped Out of a Market That No Longer Knows What It Is
Michael Burry did not walk away from markets out of fatigue or frustration. He stepped out because the market’s reasoning layer has collapsed. AI-shaped disclosures, synthetic earnings, and automated sentiment have produced an environment where value is no longer measurable in human terms. This essay explores why Burry’s exit is not a market call but a warning about the breakdown of governance and meaning inside modern finance. It also reveals why the AI OSI Stack has become an unexpected map for rebuilding an interpretable market before more investors follow him out the door.
How I Could Help BlackRock, Vanguard, and State Street Survive the Coming Governance Shock
Something unusual is happening inside the machinery of American capitalism. What looks like a routine regulatory debate is beginning to reveal the outlines of a much larger struggle for control. The White House is quietly exploring moves that could rewrite how shareholder voting works, and the entire governance system is starting to tremble. If proxy advisers and index giants lose the ability to steer corporate decisions, the balance of power inside public markets could shift overnight. And all of this unfolds at the same time that AI is transforming trust, disclosure, and the very meaning of fiduciary judgment.
Why Community, Culture, and Local AI Will Define the Next Decade
Artificial intelligence is accelerating faster than human comprehension, yet the real crisis is not technical. It is cultural. It is civic. Beneath the AI OSI Stack sits a missing layer that determines who shapes the future and who benefits from it. I explore how local AI, community compute, Indigenous governance models, and decentralized cultural logic can create the civic commons layer that modern AI has lacked. This is a blueprint for reclaiming agency in an era where average is free and acceleration never rests.
The Work That Found Me
In the rapidly evolving field of AI governance, the need for transparent, accountable structures has never been more urgent. So here’s a little about my journey, how it shapes my work on governing AI systems, and why it matters so much to me. From the earliest sparks of an idea to a full-fledged framework, my work is about more than just building systems — it’s about ensuring that technology serves humanity, not the other way around. In this post, I reflect on my personal journey, the values that drive me, and where I hope to take this work in the future.
Quiet on the Outside, Building on the Inside
In October, I went a little quiet. The lab went quiet. But that quiet was full of motion. What began as loose sketches of AI philosophy solidified into the AI OSI Stack: a structured architecture linking human judgment, governance logic, and technical standards like ISO 42001 and NIST’s AI RMF. Now it has a few formal papers and a GitHub repo. Alongside it, a new agent prototype, GERDY, began reasoning through compliance tasks autonomously, showing that governance can be both automated and transparent.