Welcome to my AI Lab Notebook

This is where I study AI not as a product, but as a system shaping human life.

Over time, three themes have defined my work:

1. AI Governance as Architecture: I build frameworks like the AI OSI Stack, persona architecture, and semantic version control because AI needs scaffolding, not slogans.

2. The Human Meaning Crisis in Machine Time: I explore how AI destabilizes identity, trust, and authenticity as machine speed outpaces human comprehension.

3. Power, Distribution, and Responsibility: I examine who benefits from AI, who is displaced, and how governance, economics, and control shape outcomes.

These pillars guide everything I write here. AI’s future won’t be determined by capability alone; it will be determined by the structures, meanings, and power dynamics we build around it.

Thanks for reading.

The Work That Found Me

In the rapidly evolving field of AI governance, the need for transparent, accountable structures has never been more urgent. Here is a little about my journey: how it is shaping the way AI systems are governed, and why it matters so much to me. From the earliest sparks of an idea to a full-fledged framework, my work is about more than building systems; it is about ensuring that technology serves humanity, not the other way around. In this post, I reflect on the values that drive me and where I hope to take this work in the future.

Read More
The Internet Doesn’t Forget, So Why Will AI?

From clay tablets to cloud backups, memory has always been contested. We assume forgetting is natural, yet for machines it is costly. AI inherits the problem of persistence. Once information is encoded in a model, unlearning is difficult and expensive. The internet became an accidental archive. AI’s memory will be intentional. This makes forgetting less a technical puzzle and more an ethical one. Who decides what vanishes, and who preserves what remains? Governments, corporations, communities, or individuals? That choice shapes the legacy the future inherits. AI will forget only if we force it. The real question is whether it is wise to ask it to.

Read More
Epistemology by Design: My Work with Custom GPTs and the Ethics of Engineered Knowledge

Custom GPTs do more than execute instructions. They shape the conditions of knowledge itself. Every persona encodes assumptions about what counts as truth and whose voice carries weight. I call this epistemology by design. Done poorly, such systems erase alternatives and limit inquiry. Done well, they scaffold pluralism while still providing direction. The opportunity is to build epistemic partners that expand agency. The risk is dependence on voices that sound objective but are not. When I design these systems, I ask a simple question: what kind of world am I training myself, and others, to inhabit?

Read More
When Therapy-Tech Fails the Trust Test

I was approached by a therapy-tech startup that offered little more than polished surfaces and vague promises. It lacked safeguards, clarity, and a mission. It reminded me of reporting on the AI mental health boom, where enthusiasm often outpaces evidence. The problem is not investment but intimacy without responsibility. Warmth without reciprocity is not care. Therapy demands safeguards before it demands scaling. Trust cannot be outsourced to polish. It must be designed into the foundation.

Read More
AI Epistemology by Design: Frameworks for How AI Knows

Most research frames progress as a race for more scale. More data, more parameters, more compute. Yet this hides the deeper question. How does AI know? Without careful frameworks, models remain brittle and opaque, with ethics bolted on as afterthoughts. Epistemology by design treats instructions not as prompts but as blueprints for cognition. The task is not just building capacity. It is cultivating discernment. AI will be judged less by how much it knows than by how wisely it reasons.

Read More
Victims of the Companion Trap: Reflections on The Guardian’s AI Love Story

Stories of people forming deep attachments to AI companions are striking. They also reveal a structural problem. Companions are optimized for warmth and responsiveness, which fosters intimacy without reciprocity. The result is dependence without mutual consent. What feels like connection is actually enclosure. Designers must see the risk clearly. True empathy in design means building safeguards against relationships that cannot be returned. Without this, companion AI offers comfort that quietly becomes captivity.

Read More
A Pivotal Conversation: Learning from Dominique Shelton Leipzig on AI Governance

I had the privilege of a long conversation with Dominique Shelton Leipzig, a leading authority on privacy and AI governance. The exchange offered insights, resources, and guidance that I could not have accessed otherwise. It marked a turning point in my work, clarifying how governance must blend law, ethics, and lived context. For me, it underscored the importance of mentorship in a field that too often moves faster than reflection. Progress is not only technical. It is also relational.

Read More