Welcome to my AI Lab Notebook

This is where I study AI not as a product, but as a system shaping human life.

Over time, three themes have defined my work:

1. AI Governance as Architecture: I build frameworks like the AI OSI Stack, persona architecture, and semantic version control because AI needs scaffolding, not slogans.

2. The Human Meaning Crisis in Machine Time: I explore how AI destabilizes identity, trust, and authenticity as machine speed outpaces human comprehension.

3. Power, Distribution, and Responsibility: I examine who benefits from AI, who is displaced, and how governance, economics, and control shape outcomes.

These pillars guide everything I write here. AI’s future won’t be determined by capability alone; it will be determined by the structures, meanings, and power dynamics we build around it.

Thanks for reading.

Security Isn’t an Upsell: Microsoft, Windows 10, and the Compliance Theater of Forced Backups

Microsoft’s attempt to bundle Windows 10 security updates with its OneDrive service feels eerily familiar. As The Verge reported, European regulators pushed back, forcing the company to offer updates without cloud lock-in. This echoes the antitrust battles of the 1990s, when Microsoft’s dominance was tested for leveraging its operating system to force adoption of other products. Today, the tactic is subtler but the pattern is the same: essential safeguards framed as bargaining chips.

Read More
The Shadow Filter: Language, Power, and the Algorithmic Struggle for Authenticity

In an earlier piece, I wrote about Semantic Version Control — the quiet ways language gets updated, corrected, or erased. The Shadow Filter is its larger frame: language as a site of power. From Qin China’s script reforms to Cold War propaganda, rulers have shaped words to shape thought. Today, algorithms act as new gatekeepers: ATS systems demand keywords, social platforms enforce algospeak, and generative AI flattens voices into statistical averages. The cost is authenticity, as fluency itself becomes suspect. But the filter’s effects are not inevitable.

Read More
Hessen on My Mind

When I checked my analytics recently, I noticed something unexpected: a cluster of readers from Hessen, Germany. At first, I thought it was random — I don’t know anyone there. But digging deeper, I discovered Hessen has become one of Europe’s fastest-growing hubs for AI, technology, and governance. Frankfurt is home to global tech firms and regulators shaping the EU AI Act, while Darmstadt hosts hessian.AI and a powerful supercomputing cluster. Suddenly, it makes a little sense why someone there might be reading my work… and it makes me want to book a flight.

Read More
AI Isn’t a Bubble. It’s Mitosis (With a High Mortality Rate)

AI is branching like a living system. The general-purpose models we know today are splitting into specialized lineages: agents, vertical tools, edge deployments, and even massive infrastructure projects. Each carries the transformer DNA, but survival is far from guaranteed. Compute costs, regulatory hurdles, and market demand act as selective pressures, shaping which branches thrive. Thinking in terms of mitosis and speciation highlights both the creativity and the fragility of this new phase in AI. The question isn’t whether AI continues, but which lineages endure.

Read More
Sharing My Voice with the IAPP: Why I Pitched Articles on AI Governance

Today, I took a leap and pitched three article ideas to the International Association of Privacy Professionals (IAPP). Each pitch grows out of experiments in my AI Lab Notebook: exploring how AI encodes truth, how governance must adapt in real time, and how AI reshapes work and dignity. The IAPP is a global hub for privacy and governance professionals, and sharing my work with their readership feels like a natural extension of the lab’s mission. Whether or not these ideas are accepted, the act of pitching is itself a step toward dialogue, accountability, and trust.

Read More
Who’s Responsible for AI Job Loss?

From factory floors to corporate boardrooms, AI is already reshaping work. Some jobs vanish outright, others quietly erode into under-employment. We like to say workers can “just upskill,” but access to retraining is uneven and often out of reach for those most affected. Behind every algorithmic shift stand human choices: executives chasing efficiency, investors rewarding cuts, policymakers setting weak guardrails. The question isn’t whether AI eliminates roles, but whether those who benefit take responsibility for those left behind.

Read More
The Internet Doesn’t Forget, So Why Will AI?

From clay tablets to cloud backups, memory has always been contested. We assume forgetting is natural, yet for machines it is costly. AI inherits the problem of persistence. Once information is encoded in a model, unlearning is difficult and expensive. The internet became an accidental archive. AI’s memory will be intentional. This makes forgetting less a technical puzzle and more an ethical one. Who decides what vanishes, and who preserves what remains? Governments, corporations, communities, or individuals? That choice shapes the legacy the future inherits. AI will forget only if we force it. The real question is whether it is wise to ask it to.

Read More
Epistemology by Design: My Work with Custom GPTs and the Ethics of Engineered Knowledge

Custom GPTs do more than execute instructions. They shape the conditions of knowledge itself. Every persona encodes assumptions about what counts as truth and whose voice carries weight. I call this epistemology by design. Done poorly, such systems erase alternatives and limit inquiry. Done well, they scaffold pluralism while still providing direction. The opportunity is to build epistemic partners that expand agency. The risk is dependence on voices that sound objective but are not. When I design these systems, I ask a simple question: what kind of world am I training myself, and others, to inhabit?

Read More
The Python Cognitive Software Engineer

The experiment began with a question. What if AI could reason like a senior developer, not only generate syntax? I built a Python Reasoning Engine that started with rigid rules but soon evolved toward principle-driven guidance. The turning point was subtle but decisive. Rules can complete code, principles can shape judgment. The difference between assistant and collaborator is found in that shift. AI will not replace engineering expertise, but it can echo the mindset that makes expertise valuable. The result is not automation of tasks but augmentation of reasoning.
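The rules-versus-principles contrast can be sketched in a few lines. This is a hypothetical illustration, not the engine itself; the function names, rules, and principles are my own assumptions for the example.

```python
# Hypothetical sketch of the contrast described above: rigid rules
# pattern-match and flag, while principles pose open questions that
# shape judgment. All names and rules here are illustrative.

def rule_check(code: str) -> list[str]:
    """Rigid rules: flag exact patterns, nothing more."""
    issues = []
    if "eval(" in code:
        issues.append("rule: avoid eval()")
    if "except:" in code:
        issues.append("rule: bare except clause")
    return issues

# Principles don't complete code; they prompt the reasoning a
# senior developer would bring to a review.
PRINCIPLES = [
    ("readability", "Would a reviewer understand this in one pass?"),
    ("failure modes", "What happens when the input is malformed?"),
    ("coupling", "Does this need to know so much about its caller?"),
]

def principle_review(code: str) -> list[str]:
    """Principle-driven guidance: questions, not pattern matches."""
    return [f"{name}: {question}" for name, question in PRINCIPLES]

snippet = "try:\n    result = eval(user_input)\nexcept:\n    pass\n"
print(rule_check(snippet))       # both rigid rules fire
print(principle_review(snippet)) # open questions, not fixes
```

The design point is the shift itself: a rule engine can only grow by adding rules, while a principle layer scales by reframing what the model is asked to consider.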

Read More
Exploring Cognitive Architecture in the Age of Custom GPTs

Custom GPTs are moving from toys into infrastructure. History reminds us of symbolic systems that collapsed under rigidity. Today the risk is different. Novelty without reliability. The challenge is to discipline the architecture. Contracts, orchestration, and safeguards turn fragile models into durable frameworks. Cognitive architecture is less about raw power than about trust. The task is not whether artificial minds can be built. The task is whether they will be built with the same care we expect of institutions that govern our lives.
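One way to picture the "contracts and safeguards" idea is a declared output schema plus validation, so a model's reply is checked before it is trusted. This is a minimal sketch under my own assumptions; the schema fields and thresholds are hypothetical, not part of any described framework.

```python
# Hypothetical sketch: an output contract with a validation safeguard.
# The field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AnswerContract:
    """What a persona promises to return."""
    claim: str
    confidence: float   # expected in [0.0, 1.0]
    sources_cited: int  # how many sources back the claim

def validate(raw: dict) -> AnswerContract:
    """Safeguard: refuse malformed or overconfident output."""
    answer = AnswerContract(**raw)
    if not 0.0 <= answer.confidence <= 1.0:
        raise ValueError("confidence outside [0, 1]")
    if answer.confidence > 0.9 and answer.sources_cited == 0:
        raise ValueError("high confidence with no cited sources")
    return answer

# A well-formed reply passes; an unsupported one is rejected upstream
# of anything that would act on it.
ok = validate({"claim": "X", "confidence": 0.6, "sources_cited": 2})
print(ok.claim)
```

The durability comes from the boundary: the model can be fragile as long as nothing downstream consumes its output unvalidated.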

Read More
Stress Testing Artificial Cognition: Building “Decision Insurance” on ChatGPT

Stress testing AI is not about breaking the system. It is about observing how it fails. I placed GPT-5 into paradoxes, ethical traps, and unsolvable problems. What I found was not collapse but graceful degradation. The reasoning bent but did not snap. From this emerged the idea of decision insurance. AI is not an oracle to replace judgment. It is a safeguard that cushions judgment at its weakest points. The lesson is not perfection but resilience. When the system fails well, it teaches us how to fail better too.

Read More
The Periodic Table of Artificial Cognition: Mapping the Architecture of Machine Reasoning

AI personas feel different for a reason. Some are precise, others poetic, some moral, others playful. These are not quirks. They are cognitive archetypes. By mapping seven distinct modes, I built a periodic table of artificial cognition. Diversity of reasoning is as valuable in machines as in people. It can be orchestrated, balanced, and put into service. The shift is important. We should not only aim for more powerful systems. We should aim for wiser ones. Cognitive diversity, once understood, can be delivered as a service.

Read More