Welcome to my AI Lab Notebook
This is where I study AI not as a product, but as a system shaping human life.
Over time, three themes have defined my work:
1. AI Governance as Architecture: I build frameworks like the AI OSI Stack, persona architecture, and semantic version control because AI needs scaffolding, not slogans.
2. The Human Meaning Crisis in Machine Time: I explore how AI destabilizes identity, trust, and authenticity as machine speed outpaces human comprehension.
3. Power, Distribution, and Responsibility: I examine who benefits from AI, who is displaced, and how governance, economics, and control shape outcomes.
These pillars guide everything I write here. AI’s future won’t be determined by capability alone; it will be determined by the structures, meanings, and power dynamics we build around it.
Thanks for reading.
The Work That Found Me
In the rapidly evolving field of AI governance, the need for transparent, accountable structures has never been more urgent. So here’s a little about my journey, how it is shaping the way AI systems are governed, and why it matters so much to me. From the earliest sparks of an idea to a full-fledged framework, my work is about more than building systems; it is about ensuring that technology serves humanity, not the other way around. In this post, I reflect on my personal journey, the values that drive me, and where I hope to take this work in the future.
My Journey Through the Berghain Challenge
When a mysterious billboard appeared in San Francisco showing only strings of numbers, few realized it hid an invitation to an underground coding arena: the Berghain Challenge. Designed by Listen Labs, the game asked players to become the bouncer at Berlin’s most exclusive club—only this time, the line outside was made of data. What follows is a personal reflection on that experiment, and how stepping up to the algorithmic door became a lesson in creativity, probability, and self-trust.
The Internet Doesn’t Forget, So Why Will AI?
From clay tablets to cloud backups, memory has always been contested. We assume forgetting is natural, yet for machines it is costly. AI inherits the problem of persistence. Once information is encoded in a model, unlearning is difficult and expensive. The internet became an accidental archive. AI’s memory will be intentional. This makes forgetting less a technical puzzle and more an ethical one. Who decides what vanishes, and who preserves what remains? Governments, corporations, communities, or individuals? That choice shapes the legacy the future inherits. AI will forget only if we force it. The real question is whether it is wise to ask it to.
Exploring Cognitive Architecture in the Age of Custom GPTs
Custom GPTs are moving from toys into infrastructure. History reminds us of symbolic systems that collapsed under rigidity. Today the risk is different: novelty without reliability. The challenge is to discipline the architecture. Contracts, orchestration, and safeguards turn fragile models into durable frameworks. Cognitive architecture is less about raw power than about trust. The task is not whether artificial minds can be built, but whether they will be built with the same care we expect of the institutions that govern our lives.
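The idea that contracts and safeguards turn a fragile model into a durable component can be sketched concretely. Below is a minimal, hypothetical Python illustration, not the framework the post describes: a model call is wrapped in an output contract that validates structure before the result is accepted, with a bounded retry as the safeguard. The `call_model` stub stands in for whatever API a real system would use.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a real model call (e.g., a custom GPT endpoint).
def call_model(prompt: str) -> dict:
    return {"answer": "42", "confidence": 0.9}

@dataclass
class OutputContract:
    """A contract: the fields a response must contain, and their types."""
    required: dict  # field name -> expected type

    def check(self, response: dict) -> bool:
        # The response honors the contract only if every required field
        # is present and has the expected type.
        return all(
            key in response and isinstance(response[key], typ)
            for key, typ in self.required.items()
        )

def governed_call(prompt: str, contract: OutputContract, retries: int = 2):
    """Safeguard: accept a response only if it honors its contract."""
    for _ in range(retries + 1):
        response = call_model(prompt)
        if contract.check(response):
            return response
    raise ValueError("Model output violated its contract after retries")

contract = OutputContract(required={"answer": str, "confidence": float})
result = governed_call("What is the answer?", contract)
```

The design point is that the contract lives outside the model: the caller, not the model, decides what counts as an acceptable answer, which is the shift from novelty to reliability.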
Looking Back, Looking Forward: How Building AI Led Me Back to Philosophy
What began as tinkering with prompts and personas became something deeper. I realized I was not just building systems but doing philosophy. Every failure marked a boundary, and every boundary revealed structure. AI stopped being only about automation. It became a mirror for identity and meaning. The more I experimented, the clearer it became. Building AI is not separate from reflection. It is philosophy in practice, where mistakes are not obstacles but the very lines that give form to learning.
Innovation as Flow: Navigating AI’s Shifting Current
AI innovation does not move like a straight line across a map. It moves like water: cascading, reshaping itself, and carrying us with it. To thrive, we must learn to steer, filter, and harness the flow. Speed alone will not save us; what matters is navigation. Just as early explorers survived by learning to read currents, today we must learn to read the turbulence of AI. Progress comes not from acceleration but from resilience in the current.
AI Governance as a Living Practice
Static governance cannot keep pace with AI. Frameworks written once soon become irrelevant. What leaders need are tools for live trade-offs. Dynamic governance treats rules as living practice. Personas, decision briefs, and transparent reasoning make choices visible. The aim is not compliance for its own sake but trust that adapts. Governance must be usable in real time, grounded in philosophy and tested in practice. That is how it becomes credible.
Why We Need Outsider Voices in the AI Conversation
The AI conversation is dominated by insiders. Corporate and academic voices hold the microphone. That dominance creates blind spots and weakens public trust. Outsiders bring the sharp questions insiders avoid. They bring lived experience and values such as fairness, usability, and dignity. If AI is to become legitimate, these voices cannot be invited late. They must be part of design from the beginning. True trust in AI will not be built by insiders alone.
A Pivotal Conversation: Learning from Dominique Shelton Leipzig on AI Governance
I had the privilege of a long conversation with Dominique Shelton Leipzig, a leading authority on privacy and AI governance. The exchange offered insights, resources, and guidance that I could not have accessed otherwise. It marked a turning point in my work, clarifying how governance must blend law, ethics, and lived context. For me, it underscored the importance of mentorship in a field that too often moves faster than reflection. Progress is not only technical. It is also relational.