Welcome to my AI Lab Notebook
This is where I study AI not as a product, but as a system shaping human life.
Over time, three themes have defined my work:
1. AI Governance as Architecture: I build frameworks like the AI OSI Stack, persona architecture, and semantic version control because AI needs scaffolding, not slogans.
2. The Human Meaning Crisis in Machine Time: I explore how AI destabilizes identity, trust, and authenticity as machine speed outpaces human comprehension.
3. Power, Distribution, and Responsibility: I examine who benefits from AI, who is displaced, and how governance, economics, and control shape outcomes.
These pillars guide everything I write here. AI’s future won’t be determined by capability alone; it will be determined by the structures, meanings, and power dynamics we build around it.
Thanks for reading.
The Work That Found Me
In AI governance, the need for transparent, accountable structures has never been more urgent. So here’s a little about my journey: how it shapes the way I think about governing AI systems, and why it matters so much to me. From the earliest sparks of an idea to a full-fledged framework, my work is about more than building systems; it’s about ensuring that technology serves humanity, not the other way around. In this post, I reflect on my personal journey, the values that drive me, and where I hope to take this work.
Quiet on the Outside, Building on the Inside
In October, I went a little quiet. The lab went quiet. But that quiet was full of motion. What began as loose sketches of AI philosophy solidified into the AI OSI Stack: a structured architecture linking human judgment, governance logic, and technical standards like ISO 42001 and NIST’s AI RMF. It now has a few formal papers and a GitHub repo. Alongside it, a new agent prototype, GERDY, began reasoning through compliance tasks autonomously, showing that governance can be both automated and transparent.
Who’s Responsible for AI Job Loss?
From factory floors to corporate boardrooms, AI is already reshaping work. Some jobs vanish outright; others quietly erode into underemployment. We like to say workers can “just upskill,” but access to retraining is uneven and often out of reach for those most affected. Behind every algorithmic shift stand human choices: executives chasing efficiency, investors rewarding cuts, policymakers setting weak guardrails. The question isn’t whether AI eliminates roles, but whether those who benefit take responsibility for those left behind.
Why You Should Care About AI
AI is already part of daily life. It screens job applications, shapes news feeds, and powers therapy tools. The question is not whether AI matters but whether it is trustworthy. Trust rests on four loops: how AI reasons, how it treats people, how it is governed, and how it shapes meaning. When these loops are weak, AI becomes invisible yet unaccountable. When they are strong, AI can become infrastructure we rely on. Caring about AI is not optional. It is already shaping choices that define who we are.
Beyond Compliance: Personas as a Reasoning Layer for AI Governance
Compliance frameworks set a floor. They define what organizations must do, but when crises hit, compliance is rarely enough. Leaders need fast reasoning that can withstand pressure and still hold up to audit. Persona architecture provides one path. By simulating structured perspectives such as legal, equity, truth-seeker, and feasibility, leaders can explore diverse angles without losing accountability. Each persona generates options that are resilient in conflict and traceable to evidence. The result is not a replacement for compliance but a complement. Governance becomes adaptive in the moment while still auditable afterward. The power lies in combining philosophy with practice, so that decisions are not only defensible but also credible.
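To make the idea concrete, here is a minimal sketch of a persona layer as structured, auditable review records. The `Persona` class, the four lens names, and the record fields are illustrative assumptions for this sketch, not the actual architecture; the point is only that each perspective produces a traceable artifact rather than an opaque opinion.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """One structured perspective in the reasoning layer (illustrative)."""
    name: str
    question: str  # the angle this persona pressure-tests

    def review(self, proposal: str) -> dict:
        # Each persona emits a traceable record: the lens applied, the
        # proposal examined, and a slot for evidence citations, so the
        # deliberation remains auditable after the fact.
        return {
            "persona": self.name,
            "proposal": proposal,
            "angle": self.question,
            "evidence": [],  # to be filled from audit logs / source documents
        }

# Hypothetical lens set mirroring the four personas named above.
PERSONAS = [
    Persona("legal", "Does this hold up under current regulation?"),
    Persona("equity", "Who bears the cost if this goes wrong?"),
    Persona("truth-seeker", "What evidence supports each claim?"),
    Persona("feasibility", "Can we execute this under real-world pressure?"),
]

def persona_review(proposal: str) -> list[dict]:
    """Run a proposal past every persona; the returned list is the audit trail."""
    return [p.review(proposal) for p in PERSONAS]

if __name__ == "__main__":
    for record in persona_review("Pause model rollout pending bias audit"):
        print(f"{record['persona']}: {record['angle']}")
```

The design choice worth noting is that personas return records, not verdicts: conflict between lenses is preserved in the output rather than averaged away, which is what keeps the process defensible in an audit.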