About the Website

  • What’s the focus of this site?

    This is my independent AI lab. I publish essays, critiques, and studies on AI design, alongside experimental personas and frameworks. My goal is to explore how AI can be built with clarity, trust, and purpose — and to stress-test those ideas in public.

  • What makes your work different?

I don’t treat AI as a generic assistant or a black box. I design custom GPTs and personas with clear roles and reasoning styles. I use them to test how AI can support reasoning, strategy, and human judgment, not replace them. My research blends critique, prototyping, and framework design to show alternatives to today’s engagement-driven AI.

  • Why publish in public?

    Because transparency is part of the research. Sharing prototypes, wins, and failures makes the work accountable and invites others into the process.

About Me

  • What’s your background?

    I started in the arts, trained in IT operations and cloud systems, and now work at the intersection of AI design, governance, and human values. My art background gives me a lens for form and human expression; my IT background grounds me in systems and infrastructure.

  • How does your background influence your approach to AI R&D?

    Blending art and IT gives me a dual lens: creativity and human-centered design from art, plus systems expertise from IT. This combination helps me bridge technical detail with governance and ethics, designing AI that is both functional and people-focused.

  • What’s your mission in AI?

    My mission is to build specialized, ethically grounded AI that empowers leaders and strengthens institutions. I want AI to be judged by how well it supports real-world decision-making, not just by hype or scale. The goal is not only performance, but AI that earns trust, preserves human dignity, and delivers lasting benefit.

  • What values guide your work?

    I work from five principles: intellectual honesty, ethical responsibility, practical innovation, inclusivity, and critical inquiry. These guide me to test assumptions rigorously, design for trust, and prioritize long-term resilience over short-term buzz. Alongside these, my research is shaped by design commitments that run through my frameworks: transparency as infrastructure, dignity as constraint, agency as a goal, authenticity as signal, and trust as the long game.

About the Research

  • What do you mean by “human-centered AI”?

    For me, human-centered AI means tools that enhance human judgment rather than replace it. They should be transparent, auditable, and respectful of dignity, while staying grounded in real-world needs—not just technical performance.

  • What do you mean by “reasoning beyond data”?

    I experiment with AI that reasons step by step rather than relying only on pattern recognition. My REAP (Recursive Emergent Abstraction Program) solver, developed for the ARC Prize, is a Python-based symbolic program synthesis engine. It infers transformations from sparse examples, simulating human-like deduction. Achieving a non-zero score in this competition showed progress toward true reasoning, not just memorization.
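The core idea of inferring a transformation from sparse examples can be illustrated as a toy search over a small library of grid operations. This is a minimal sketch of symbolic program synthesis in general, not the actual REAP implementation; the primitives and the brute-force search here are hypothetical stand-ins.

```python
from itertools import product

# Tiny DSL of grid transformations (hypothetical primitives,
# not the actual REAP operator set).
PRIMITIVES = {
    "identity":  lambda g: g,
    "flip_h":    lambda g: [row[::-1] for row in g],
    "flip_v":    lambda g: g[::-1],
    "transpose": lambda g: [list(r) for r in zip(*g)],
}

def synthesize(examples, max_depth=2):
    """Search compositions of primitives for one consistent with
    every (input, output) example pair; return it or None."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(g, names=names):
                for n in names:
                    g = PRIMITIVES[n](g)
                return g
            if all(program(i) == o for i, o in examples):
                return names, program
    return None

# Two sparse examples of the same unknown transformation.
examples = [
    ([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
    ([[5, 6], [7, 8]], [[6, 5], [8, 7]]),
]
names, program = synthesize(examples)
print(names)                      # ('flip_h',)
print(program([[0, 9], [9, 0]]))  # [[9, 0], [0, 9]] — generalizes to unseen input
```

The point of the sketch is the shape of the problem: a consistent program found from two examples then applies to inputs it has never seen, which is what distinguishes deduction over structure from pattern matching over data.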

  • How do you approach AI governance?

I see governance as the baseline of rules and safeguards, but leaders need more than compliance. My approach pairs governance with reasoning personas like Solomon, which translate concepts such as duty, utility, and realism into practical questions. This enables fast, defensible decisions that still map cleanly back to compliance and audit standards.

  • What is your “AI OSI Stack” framework?

    The AI OSI Stack is a layered model I designed to clarify roles, risks, and responsibilities in AI—similar to the OSI model in networking. It separates concerns across seven layers, from hardware up to governance and trust. This makes it easier to identify choke points (like GPU supply chains or API monopolies), embed accountability structurally, and encourage interoperability and specialization.
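The layered-responsibility idea can be sketched as a simple ordered structure. Note the caveat: only the endpoints, hardware at the bottom and governance and trust at the top, come from the description above; the five intermediate layer names here are placeholders, not the actual AI OSI Stack layers.

```python
# Illustrative sketch of a seven-layer responsibility model.
# Only the bottom (hardware) and top (governance/trust) layers come
# from the framework description; the intermediate names are
# placeholders, not the real AI OSI Stack layers.
STACK = [
    ("hardware",         "chips, GPUs, accelerators"),
    ("infrastructure",   "data centers, cloud platforms"),
    ("data",             "datasets, pipelines, provenance"),
    ("models",           "training, weights, fine-tuning"),
    ("apis",             "serving, access, rate limits"),
    ("applications",     "assistants, agents, products"),
    ("governance_trust", "accountability, audit, policy"),
]

def layer_of(concern: str) -> int:
    """Return the 1-based layer index where a named concern lives,
    so choke points (e.g. GPU supply) can be localized to a layer."""
    for i, (name, scope) in enumerate(STACK, start=1):
        if concern in scope or concern == name:
            return i
    raise KeyError(concern)

print(layer_of("GPUs"))   # layer 1: a hardware-level choke point
print(layer_of("audit"))  # layer 7: a governance-level responsibility
```

Separating layers this way is what lets a single concern, such as an API monopoly, be assigned to one layer with its own accountability, instead of being diffused across the whole system.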

  • What is your research methodology?

    I work in public: prototyping, stress-testing in high-pressure scenarios, and documenting both wins and failures. I initially tested a multi-AI validation approach—having different models critique each other’s work—but found it led to recursive echo chambers. Now, I prioritize validation from human experts and practitioners to ensure outputs are grounded in real-world relevance.

  • What problems do you see in the AI industry?

    I see systemic flaws: generic assistants that exploit users, tools that don’t deliver ROI, recursive validation loops that create hype, and hiring systems that erode trust. My projects critique these blind spots and test alternatives that emphasize resilience, accountability, and real-world value.

  • What’s your design philosophy for AI?

    I treat AI design as a philosophical practice as much as a technical one. My frameworks start with epistemology — how AI knows — and build outward toward ontology, axiology, and governance. The focus is on creating role-specific systems that are transparent, auditable, and aligned with human values, so they strengthen trust rather than erode it.

About Personas

  • What is your vision for “Persona Architecture” in AI?

    Persona Architecture is my framework for building role-specific AIs with clear mandates and reasoning styles. Unlike generic assistants, which often slip into companionship or “engineered warmth,” personas like Solomon (strategy), PyCode (coding mentorship), or Aristotle (ethics) are purpose-driven and bounded. This approach avoids exploitation, enables trust, and creates more differentiated, long-term value. See the [Responsible Use of AI] page for full details.

  • What are “persona-driven AI systems”, and how should readers use them?

    My persona-driven AIs are role-specific systems with clear mandates and reasoning styles—for example, Solomon (strategy) or PyCode (coding). They are prototypes, not generic assistants or finished products. You can try them on the projects page, but they should be treated as experiments for testing ideas and sparking dialogue, not as professional tools. I don’t guarantee their accuracy and I’m not responsible for misuse. See the [Disclaimer & Liability Notice] for full details.

  • Can your personas be used for real decisions?

    The personas are accessible to the public through platforms like ChatGPT, but they are experimental systems with no guarantees. Any application of their outputs in real-world contexts is entirely the user’s choice and responsibility. See the [Terms of Use] for full details.

Usage and Legal

  • Are you affiliated with any AI company or institution?

    No. I operate independently. While I use tools like ChatGPT, Gemini, or Claude in my research, I don’t represent or receive funding from those companies. References to them are for context only. See the [Legal Reference Notice] for full details.

  • Can I cite or republish your work?

    Yes, with attribution. Republishing in full requires permission. For journalists, researchers, or think tanks, I encourage direct citation and linking back to the source. See the [Content Ownership & Attribution Policy] for full details.

  • Do you collect user data from the projects page?

No. I don’t collect or store personal data from anyone who tries the personas. The experiments run through OpenAI’s ChatGPT platform, so any data handling follows OpenAI’s terms of service, not mine. My website itself uses only standard analytics (Squarespace and Google Analytics) to track visits at an aggregate level. I don’t profile, resell, or collect identifying information. See the [Privacy & Cookies Policy] for details.

  • Is this professional advice?

    No. This site is for research and analysis, not legal, financial, medical, or professional advice. The same applies to the AI personas on the projects page: they are experimental prototypes, not professional tools. See the [Disclaimer & Liability Notice] for the full terms.

  • Do you use AI to make this content?

    Yes. I use AI tools for writing, research, and visual content. I collaborate with AI systems but make all final editorial decisions myself.

  • Who owns this work?

    All original writing, experiments, and frameworks here are mine unless cited. References to companies or platforms are for context only and don’t imply endorsement. See the [Content Ownership & Attribution Policy].

  • Where can I read the full legal terms for this site?

    All official policies are linked at the bottom of every page: Disclaimer, Responsible Use, Legal Reference, Privacy & Cookies, Terms of Use, and Content Ownership & Attribution.