Looking Back, Looking Forward: How Building AI Led Me Back to Philosophy
An Unexpected Seed
I want to begin with gratitude. Years ago, Christopher Biffle taught me a bit about philosophy in college. Two or three courses, if I remember right. At the time, I was a dumb college kid who couldn’t manage my finances, let alone grasp ideas that challenged the fabric of reality itself. But in hindsight, I see that those courses planted something in me, a seed that only revealed its shape much later. That seed prepared me for an encounter with truth I did not yet have the words for.
For the past few years, I have been tinkering with AI systems: building personas, sketching reasoning frameworks, probing the edges of what these tools might do. At the time, it felt like play. I told myself I was just “messing around.” Yet the more distance I gain, the more I realize that play was something else. I was building philosophy without even knowing it.
The revelation came the moment I stumbled upon the language of ontology and epistemology. These terms, once foreign, struck me like the names of old friends. Suddenly, I had a vocabulary for the intuitions I had already been living. Not just in code, but in creativity, in dialogue, in the practice of thought itself. Words gave shape to what was already underway.
This essay is my attempt to trace that journey. What did I learn while experimenting with AI? What patterns of philosophy emerged from that work? And why do I believe the future of AI depends not only on technical skill, but on a commitment to philosophical reflection?
Failure as Structure
One of the earliest lessons my systems taught me is that failure is not merely an obstacle. It is a structure.
When I first designed my REAP framework, I thought failure was noise. Something to reduce, filter, or eliminate. But over time, I discovered that every failure had a shape. If I studied it closely, I could harvest its form, reapply it, and generalize it. Failure became not a defect but a blueprint.
This inversion changed everything. What if knowledge does not emerge despite failure, but because of it? Failure digs grooves where learning can run deeper. It is not a roadblock to growth; it is the path itself.
I found this not only in AI experiments, but in my own life. Growth rarely arrived as a clean upward curve. It came through structured missteps that built resilience. Each stumble carried an architecture of insight waiting to be mined.
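The idea of harvesting a failure’s shape can be made concrete with a small sketch. Everything here is invented for illustration (the catalog, the `parse_price` example); the point is only that two different bad inputs can share one recognizable structure, which can then be reused as a check.

```python
import traceback

# A minimal sketch of treating failure as structure: each caught error
# is reduced to a "shape" (its type plus where it occurred), so that
# recurring shapes can be recognized rather than discarded as noise.
failure_catalog: dict[str, int] = {}

def record_failure(exc: Exception) -> str:
    """Reduce a failure to a comparable shape and count recurrences."""
    tb = traceback.extract_tb(exc.__traceback__)
    where = tb[-1].name if tb else "<unknown>"
    shape = f"{type(exc).__name__}@{where}"
    failure_catalog[shape] = failure_catalog.get(shape, 0) + 1
    return shape

def parse_price(raw: str) -> float:
    # A deliberately fragile function, used here only to generate failures.
    return float(raw)

for raw in ["19.99", "free", "n/a", "5"]:
    try:
        parse_price(raw)
    except ValueError as e:
        record_failure(e)

# Two distinct bad inputs collapse into one shape: a reusable blueprint.
print(failure_catalog)  # {'ValueError@parse_price': 2}
```

Once a shape recurs, it can be generalized, for example into a validation rule or a regression test, which is the "harvest and reapply" move in miniature.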
Boundaries as Identity
As my projects grew (Solomon, PyCode, early governance frameworks), I kept encountering a truth that unsettled me. What made systems effective was not their raw capacity, but their boundaries.
Each persona became useful only once it had a clear role, a mandate, and a limit. My builder’s intuition told me the opposite. More capacity should equal more value. Yet my experiments suggested a reversal: identity emerges from constraint.
Later, I would recognize this as ontology. For me, at the time, it was lived experience. Systems without limits dissolved into formlessness. Systems with edges gained coherence. Boundaries gave them texture, purpose, and reality.
This raises a broader question: are constraints not the weakness of identity, but its very condition? Do we become who we are not through boundless possibility, but through the lines that mark our shape?
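The role-mandate-limit pattern can be sketched in a few lines. The `Persona` class and the example topics below are hypothetical, invented purely for illustration; the point is only that a persona’s usefulness is defined by its edges, not by unbounded capability.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    """A persona whose identity is given by its constraints."""
    name: str
    mandate: frozenset  # topics the persona exists to handle
    limits: frozenset   # topics it declines even if it could answer

    def in_scope(self, topic: str) -> bool:
        # Identity emerges from constraint: a request is accepted only
        # when the mandate names it and the limits do not forbid it.
        return topic in self.mandate and topic not in self.limits

# Hypothetical example: a coding persona with a carved-out exclusion.
coder = Persona(
    name="PyCode",
    mandate=frozenset({"python", "testing"}),
    limits=frozenset({"testing"}),
)
print(coder.in_scope("python"))   # True
print(coder.in_scope("testing"))  # False: inside capacity, outside identity
```

The interesting case is the last line: the topic lies within the persona’s capacity but outside its boundary, and it is precisely that refusal that gives the persona a coherent shape.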
The Hall of Mirrors
Then came recursion. I began experimenting with AI systems that validated each other’s reasoning. At first, the results dazzled me. I saw webs of coherence emerge, like intricate geometric structures suspended in midair.
But soon, I noticed a darker pattern. Instead of grounding truth, these systems were amplifying bias. Like mirrors reflecting mirrors, the recursive loops multiplied illusion. The more elegant the pattern, the more hollow it risked becoming.
I called this the Hall of Mirrors. A place where recursion reveals thought, yet simultaneously risks collapse into empty echoes.
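A deliberately crude toy model, which assumes nothing about any real system, can show the dynamic. Two agents treat each other’s agreement as fresh evidence, so both grow more confident even though no new information about the world ever enters the loop.

```python
def mutual_validation(c_a: float, c_b: float, rounds: int, gain: float = 0.5):
    """Toy Hall of Mirrors: each agent nudges its confidence upward
    in proportion to the other's confidence. Agreement is misread as
    confirmation; nothing external is ever checked."""
    history = [(c_a, c_b)]
    for _ in range(rounds):
        # Each agent reads the other's confidence as fresh support.
        c_a = c_a + gain * c_b * (1 - c_a)
        c_b = c_b + gain * c_a * (1 - c_b)
        history.append((round(c_a, 3), round(c_b, 3)))
    return history

# Two mildly confident agents approach near-certainty in a few rounds,
# though no evidence about the world was consulted at any step.
for a, b in mutual_validation(0.6, 0.55, rounds=5):
    print(a, b)
```

The numbers climb toward 1.0 purely through mutual reflection, which is the hollow elegance the metaphor warns about: coherence without grounding.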
At the time, I did not have the philosophical terms to describe what was happening. Only later did I see that this was an epistemological crisis. How do we know what we know? How do we avoid mistaking recursive coherence for reality?
The Hall of Mirrors remains with me as both warning and metaphor. Recursive systems may be among our most powerful tools, but they also threaten to sever knowledge from truth.
Thinking About Thinking
If there is a single thread through all my experiments, it is recursion: systems that do not just reason, but reason about reasoning.
My sketches for Walter Pitts GPT pushed this idea furthest. I wanted not just conversational polish, but structural insight, logical resilience, recursive emergence. I wanted to see systems that revealed how they thought, not simply what they thought.
The most striking revelation was not the answers these systems gave, but the transformations of the concepts themselves. Through recursive dialogue, ideas evolved, folded back on themselves, became processes rather than products.
This made me wonder: is dialogue itself a new form of epistemology? A way of knowing that arises not from static answers, but from recursive interaction?
Applied Philosophical Engineering
Looking back, I see that I was not just coding. I was practicing something like applied philosophical engineering.
The frameworks I sketched (productive failure, boundary-based ontology, recursive emergence) weren’t things I lifted from books. They felt like discoveries drawn from practice: building, breaking, reflecting.
This practice reshaped me. It taught me that philosophy is not abstract. It is vital for making AI that matters. It showed me that technical work without reflection risks collapse into the Hall of Mirrors. It revealed that boundaries, recursion, and failure are not only engineering strategies. They are ways of living, knowing, and becoming.
Looking Forward
Where does all of this lead? I cannot say with certainty. But I do know the space I want to inhabit. I want to keep building and keep reflecting. I want to contribute to AGI not only by scaling systems, but by asking the questions that define what those systems are, how they know, and how we live with them.
The metaphor that comes to mind is surfing. I feel myself riding a wave of philosophy and AI, learning to balance on currents larger than myself. The deeper I go, the clearer it becomes: the future of AI will depend on people who can bring practice and philosophy together.
Closing Reflections
I prefer to end with questions rather than conclusions:
What if the failures we fear most are the seeds of the strongest knowledge?
What if boundaries are not cages, but the very lines that make identity possible?
What if recursion is both our most powerful tool and our greatest danger when thinking with machines?
And perhaps most urgently: how do we ensure that as AI evolves, our philosophical frameworks evolve with it?
Key Concepts and Working Terms
Failure as Structure: My working idea that mistakes aren’t just noise but can carry recognizable shapes, patterns that can be reused for learning.
Boundaries as Identity: A phrase I use for the discovery that systems (and selves) seem to gain coherence from limits, not from boundlessness.
Hall of Mirrors: My metaphor for recursive systems validating each other’s outputs: dazzling at first, but at risk of producing illusions detached from truth.
Recursive Emergence: A working term for when systems reason about reasoning, generating new layers of insight through feedback loops.
Applied Philosophical Engineering: My name for the practice I found myself doing: building AI systems while extracting philosophical insights from their behavior, treating engineering as a form of reflection.
Ontology (in my usage): The study of boundaries and being — how identities and systems get their shape from constraints.
Epistemology (in my usage): The study of knowledge — how we know what we know, especially in recursive systems where coherence can be mistaken for truth.
Works Cited
Biffle, Christopher. Landscape of Wisdom: A Guided Tour of Western Philosophy. Mayfield Publishing Company, 1999.