Welcome to my AI Lab Notebook
This is where I study AI not as a product, but as a system shaping human life.
Over time, three themes have defined my work:
1. AI Governance as Architecture: I build frameworks like the AI OSI Stack, persona architecture, and semantic version control because AI needs scaffolding, not slogans.
2. The Human Meaning Crisis in Machine Time: I explore how AI destabilizes identity, trust, and authenticity as machine speed outpaces human comprehension.
3. Power, Distribution, and Responsibility: I examine who benefits from AI, who is displaced, and how governance, economics, and control shape outcomes.
These pillars guide everything I write here. AI’s future won’t be determined by capability alone; it will be determined by the structures, meanings, and power dynamics we build around it.
Thanks for reading.
From Frameworks to Chaos: Testing AI in a Crisis Scenario
What happens when AI is dropped into a boardroom crisis with fractured alliances and incomplete data? I tested this by simulating a mutiny scenario. Traditional frameworks collapsed under the weight of uncertainty. Yet Solomon adapted, not with formulas but with improvisation. One method stood out: by forcing adversaries to steel-man each other’s arguments, it transformed conflict into structured dialogue. The exercise revealed AI’s potential as a crisis partner. It does not simply repeat frameworks; it improvises, centering on trust, legitimacy, and power dynamics. In unpredictable conditions, that kind of adaptability matters more than perfection.