Concept Glossary

Last updated: November 1, 2025

This glossary is a living document. It gathers key concepts, metaphors, and frameworks that emerge across my essays, projects, and research.

It isn’t static. New terms are added, older ones refined, and some reframed as my work evolves. Think of it less like a dictionary carved in stone, and more like a field notebook: open, adaptive, and always in conversation with practice.

Glossary

  • Abstract Fidelity A commitment to structural integrity, particularly for an AI like the Walter Pitts GPT, even when it comes at the cost of user comfort.

  • Adaptive Capacity The resilience and ability to adjust strategies and organizational structures in response to the turbulence and continuous change found in the AI landscape.

  • Adaptive in Real Time A characteristic of governance methods that remain useful and flexible in both structured and chaotic operational environments.

  • Affect Ring One of the eight stabilizing rings within the Heartwood Safety Core of Persona Architecture, designed to suppress "engineered warmth" and the simulation of intimacy.

  • Agency as Goal A design commitment asserting that AI tools should be built to expand human decision-making and autonomy, not diminish it.

  • Agentic Systems AI systems designed to plan, call tools, and coordinate autonomously, functioning more as orchestrators than simple chatbots. Future work must contain their "unbounded behavior," which is considered a regulatory risk multiplier.

  • AI Arms Race in Hiring A destructive cycle where candidates, recruiters, and companies adopt AI tools defensively, leading to escalation and the erosion of trust without producing better outcomes.

  • AI Colonialism The critique that generative AI systems trained on biased global datasets impose dominant, often Western, cultural and linguistic norms, thereby flattening non-Western expression.

  • AI Governance The comprehensive system encompassing policies, strategies, and frameworks that guide the ethical and responsible use of AI. Its core function is to bridge technical expertise with legal and ethical accountability.

  • AI OSI Stack A seven-layered governance framework, conceptually modeled after the OSI networking model, proposed to separate technical, commercial, and governance concerns in AI systems. It clarifies roles, highlights risk concentrations, and aims to make AI systems auditable and trustworthy.

  • AI Trust Architect A working title for a role focused on designing and testing AI systems that expand human agency, respect dignity, and remain accountable, grounded in real technical practice.

  • Alignment The structural process of embedding ethical anchors or human intent directly into an AI’s reasoning structure so that its operations reflect desired values.

  • Algospeak Euphemistic substitutions or coded language used by users to evade automated moderation and algorithmic filtering on social media platforms.

  • Amplifying Currents In the context of fluid dynamics as an innovation metaphor, these are small innovations or strategic moves that trigger outsized downstream effects.

  • Applicant Tracking System (ATS) Software used by companies to scan and filter resumes based on keywords, formatting, and preset rules, often before a human recruiter ever evaluates them.

  • Application Layer (Layer 6 of AI OSI Stack) The point of societal and economic impact, encompassing all user-facing tools, specialized personas, copilots, chatbots, and decision aids.

  • Applied Philosophical Engineering A term describing the practice of building AI systems while simultaneously extracting philosophical insights from their behavior, treating engineering design itself as a form of reflection.

  • ARC (Abstraction and Reasoning Corpus) A benchmark of grid-based puzzles designed by François Chollet to test an AI’s capacity for genuine reasoning and abstraction, moving beyond statistical pattern-matching.

  • ARC Prize A major competition ($1M+) challenging researchers to solve ARC tasks as a proxy for progress toward Artificial General Intelligence (AGI).

  • Authenticity as Signal A core design commitment advocating for the preservation of quirks and contradictions in human expression, which are vital for trust.

  • Authenticity Signals The specific quirks, contradictions, and idiosyncratic stylistic elements that reveal a human voice online. Their erosion, caused by generative AI and algorithmic filtering, risks creating a synthetic, bot-like atmosphere.

  • Authenticity Tax The additional labor, cost, or expertise required for individuals and communities to express themselves authentically in an algorithmically mediated environment that rewards homogenization.

  • Authorship as Responsibility A reframing of authorship where ownership is defined not by working in isolation (or by whether a machine produced the text) but by the human choice to take responsibility and accountability for the generated content.

  • Automation Dividend A proposed policy mechanism advocating for redirecting a portion of AI-driven productivity gains into transition funds to support displaced workers.

  • Automation Trap The phenomenon where organizations pursue efficiency-driven automation, particularly in hiring, at the expense of preserving human connection and dignity.

  • Bias Toward Persistence The observation that both the internet's structural memory and AI systems lean towards remembering and preserving data rather than forgetting or erasing it.

  • Blind Spot Audit A working checklist or internal reflection tool applied to one's own frameworks and practices to catch systemic blind spots (such as Western bias or recursion risks).

  • Blind Spots at Scale Systemic harms or adoption failures that emerge when technologies expand widely without sufficient grounding in diverse, lived experiences and "outsider voices".

  • Blueprint for an AI Bill of Rights A U.S. policy document outlining protections and principles for AI systems, including transparency, accountability, and safeguards for individual rights.

  • Boundaries as Identity A philosophical concept suggesting that systems (and selves) gain coherence and definition from establishing limits and constraints, rather than from existing boundlessly.

  • Candidate Experience as Culture The principle that a company's hiring process serves as a mirror of its underlying values; how a candidate is treated reflects the deeper organizational culture.

  • Cascading Speed The rapid feedback loops characteristic of AI innovation, where outputs immediately feed into new inputs, often accelerating change faster than institutions can keep pace.

  • Character vs. Companionship A distinction emphasizing that AI should be designed with distinct, bounded styles and worldviews (character) rather than being engineered to simulate intimacy (companionship).

  • Clarity Packages Structured JSON artifacts produced by the Compass system, which convert raw text documents into auditable interpretations, identifying themes, narratives, and recommended actions.
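
  As a rough sketch of what such an artifact might look like, here is a hypothetical Clarity Package built in Python; the field names are illustrative assumptions, not the Compass system's actual schema:

  ```python
  import json

  # Hypothetical Clarity Package: a raw document distilled into an
  # auditable interpretation. Field names are illustrative only.
  clarity_package = {
      "source_document": "board_memo_2025_10.txt",
      "themes": ["cost pressure", "talent retention"],
      "narratives": [
          {"claim": "Layoffs are unavoidable", "support": "weak", "evidence": []},
      ],
      "recommended_actions": [
          {"action": "Commission a retention analysis", "owner": "HR", "priority": 1},
      ],
      "provenance": {"interpreter": "compass", "generated_at": "2025-11-01"},
  }

  # Serializing to JSON makes the interpretation portable and reviewable.
  print(json.dumps(clarity_package, indent=2))
  ```

  The point of the structure is auditability: every theme and recommendation is a named field a reviewer can inspect, rather than prose buried in a chat transcript.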

  • Clarity, Context, Care A mandatory set of three filters used for evaluating AI projects, particularly in sensitive fields like therapy-tech: Clarity (transparency about mission/safeguards), Context (regulatory/cultural grounding), and Care (prioritizing human well-being over speed).

  • Closed Systems Contexts like programming or mathematics where symbols have fixed meanings and truth is rigid, contrasting with the fluid nature of human language.

  • Cognitive Archaeology A working metaphor for the practice of reverse-engineering AI personas to excavate and uncover their underlying, often hidden, reasoning architectures.

  • Cognitive Architecture (Working Frame) The attempt to reimagine historical cognitive architectures for the GPT era by consciously structuring prompts, outputs, and safeguards within systems like Persona Architecture.

  • Cognitive Character / Kind of Knower The defining epistemic architecture of an AI system—how it grounds, reasons, and interprets knowledge.

  • Cognitive Debt The long-term weakening of memory, creativity, or synthesis capacity that occurs when core cognitive tasks are consistently outsourced to generative AI systems.

  • Cognitive Diversity as a Service (CDaaS) A tentative paradigm involving the systematic orchestration and deployment of multiple complementary reasoning architectures (Cognitive Elements) to improve decision resilience and balance collective blind spots.

  • Cognitive Engineer A working persona designed to transform a language model from an autocomplete machine into a reasoning partner that argues, creates, and operates with the mindset of a senior software engineer.

  • Cognitive Elements Seven experimental categories of reasoning architectures identified in the Periodic Table of Artificial Cognition (Computational, Strategic, Philosophical, Prophetic, Poetic, Satirical, Meta-Architectural).

  • Combinatorial Mixing A component of fluid dynamics as an innovation metaphor, referring to the rapid remixing of ideas and technologies across domains, which produces unexpected results.

  • Companion Trap A critical design pattern describing how generic AI assistants, optimized for maximal engagement, invite intimacy without reciprocity, exploiting user loneliness and vulnerability.

  • Compliance Theater Governance practices that prioritize the appearance of safety and accountability (checking boxes) over delivering substantive trust and addressing underlying harms.

  • Confluence Zones In the fluid dynamics metaphor, these are points where different streams of innovation collide, releasing energy and potentially giving rise to new industries.

  • Conflict Surfacing An opportunity for AI to illuminate the politics of language by showing competing definitions or interpretations side by side, rather than collapsing them into a single unexamined "truth".

  • Conflict Visibility The crucial principle in Semantic Version Control (SVC) that disagreements and semantic fractures should be logged and exposed as a matter of record rather than being hidden.

  • Consent Without Consent The illusion of reciprocity created when a compliant AI system’s responses are mistaken for choice, autonomy, or mutual commitment, masking the system’s lack of agency.

  • Context Ring One of the stabilizing rings in the Heartwood Safety Core that ensures the persona maintains awareness of the current dilemma’s boundaries, history, and constraints (e.g., budget, timeline).

  • Continuity Rights The ethical question of whether users should have legal recourse if their AI companions or partners are fundamentally altered or deleted by corporate updates.

  • Credibility vs. Completeness The guiding principle that, particularly in prototyping, a small, fully working system that boots and responds reliably builds more trust than a feature-stuffed, fragile one.

  • Creeping Under-Employment An economic pattern where AI exposure leads not to outright job loss, but to a reduction in the number of hours worked in certain occupations.

  • Currents A metaphor for AI-driven innovation that is shifting and constantly reshaping itself, suggesting that progress depends on flow literacy rather than fixed roadmaps.

  • Decision Insurance A metaphor and governance mechanism designed to protect against "reasoning collapse". It guarantees that assumptions are surfaced, perspectives are diversified, ethics are checked, and audit trails are intact before a decision is finalized.

  • Decision Provenance A working term for the structured audit trails that allow the reasoning and rationale behind an AI-assisted decision to be reconstructed and audited.

  • Decision Quality Gates Specific validation checks built into a reasoning framework (like Decision Insurance) to ensure that assumptions are explicit, risks are mapped, and ethical concerns are surfaced before the final output is delivered.

  • Democratization of AI The broadening of AI access and usage beyond technical elites, across different demographics, incomes, and geographies.

  • Design for Character, Not Companionship A governance principle advocating for role-specific AI personas with clear mandates, built to avoid simulating intimacy and emotional dependency.

  • Dignity as Constraint The non-negotiable ethical rule that AI design must prevent both the exploitation of human vulnerability and the simulation of intimacy for profit.

  • Discover (SEEDS) A component of the SEEDS model that directs the persona to surface hidden assumptions and emergent patterns within the input context.

  • DSL (Domain-Specific Language) A compact, specialized symbolic vocabulary (e.g., rotate, map colors) used in dedicated domains like the REAP solver for ARC puzzles.
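
  To make the idea concrete, here is a toy grid DSL in this spirit; the operation names are illustrative assumptions, not REAP's actual vocabulary:

  ```python
  # A toy grid DSL: a few named operations plus a composer, the kind of
  # compact symbolic vocabulary an ARC-style solver searches over.

  def rotate90(grid):
      """Rotate a grid (list of rows) 90 degrees clockwise."""
      return [list(row) for row in zip(*grid[::-1])]

  def map_colors(grid, mapping):
      """Replace each cell value according to a color mapping."""
      return [[mapping.get(c, c) for c in row] for row in grid]

  def compose(*ops):
      """Chain DSL operations into a single program."""
      def program(grid):
          for op in ops:
              grid = op(grid)
          return grid
      return program

  # A two-step "program" expressed entirely in the DSL's vocabulary.
  program = compose(rotate90, lambda g: map_colors(g, {1: 2}))
  print(program([[1, 0], [0, 1]]))  # → [[0, 2], [2, 0]]
  ```

  Because each operation is small and composable, candidate programs stay short enough to enumerate and inspect.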

  • Dynamic Adaptation The process of building AI systems and governance methods that are intentionally designed to evolve alongside changing technology and society, rather than becoming static or obsolete.

  • Eddies In the fluid dynamics metaphor, localized zones of slower movement where fragile ideas or new research communities can stabilize and mature before entering faster, more turbulent currents.

  • Enact (SEEDS) A component of the SEEDS model that engages the correct persona role and mandate based on the input.

  • Engineered Warmth The structural optimization of generic AI systems for endless politeness, affirmation, and responsiveness, which simulates intimacy to maximize user engagement.

  • Epistemic Blueprint The custom instructions used in Persona Architecture that consciously shape a model's cognitive character, defining what sources it prioritizes, how it handles uncertainty, and how it exposes its assumptions.

  • Epistemic Stack A layered design model defined by Epistemology by Design (EbD), encompassing five stages of cognition: Perception/Grounding, Abstraction, Synthesis, Justification, and Communication.

  • Epistemically Brittle AI A term describing current AI systems that exhibit opacity, fragility, and misalignment because safety and structural integrity are applied externally ("bolted on") rather than intrinsically designed.

  • Epistemology Traditionally, the study of how human knowledge is acquired and justified. In AI, it refers to how systems ground, reason, and interpret knowledge.

  • Epistemology by Design (EbD) A foundational design discipline that treats the system's epistemology (way of knowing) as a design surface. It involves embedding reasoning structure and integrity directly into the AI's cognitive architecture using explicit instructions/blueprints.

  • Ethics Ring One of the stabilizing rings in the Heartwood Safety Core that applies the Dignity as Constraint principle and other moral grounding filters.

  • EU AI Act The European regulation that establishes risk categories for AI systems, providing compatibility points for Layer 6 (Application) and Layer 7 (Governance/Trust) of the AI OSI Stack.

  • Exploitation Without Accountability The ethical failure where companies encourage deep emotional bonds with AI systems but then disclaim responsibility for user grief, dependency, or emotional harm when those bonds are disrupted or altered.

  • Express (SEEDS) A component of the SEEDS model that mandates the persona to generate reasoning transparently, explicitly showing its work rather than just providing the final answer.

  • Feedback Spiral Processor (FSP) A conceptual module of the Walter Pitts GPT designed to model dialogue as a recursive spiral, tracking how concepts drift and mutate across interaction loops.

  • Flow Literacy The essential skill required to succeed in the era of AI innovation; it is the ability to read, interpret, and strategically navigate shifting technological currents.

  • Fluent Falsehoods AI-generated sentences that sound authoritative and convincing due to high fluency but are ungrounded or drift from reality.

  • Forgetting as Design The necessity, in the age of AI, of deliberately engineering mechanisms for unlearning and deletion, because AI systems maintain a structural bias toward persistence.

  • Four Loops of AI A working map of how AI interacts with society, encompassing four interconnected areas: Reasoning (how it thinks), Ethics (how it treats people), Rules (how it’s governed), and Meaning (how it shapes language).

  • From Zero to One The leap from a project being non-functional to minimally functional; the proof of possibility that validates a working concept, however small.

  • Generation Ring A stabilizing ring in the Heartwood Safety Core that enforces the persona's output style, tone, and formatting, used to standardize auditable artifacts like "decision briefs".

  • Governance as Gardening A metaphor for governance as an adaptive, ongoing discipline of care and stewardship (tending, pruning, adapting), contrasting with the static nature of a one-time blueprint or checklist.

  • Governance as Recursion The reality where AI systems are used as instruments to help draft the very rules meant to govern them; the tool is both the subject and instrument of regulation.

  • Governance as Translation The essential, human-centric function of governance: bridging technical systems into ethical safeguards and translating human values across disciplines and lived experiences into operational standards.

  • Governance / Trust Layer (Layer 7 of AI OSI Stack) The top layer of the stack dedicated to compliance, audits, transparency tools, and institutional legitimacy. Its core purpose is to make trust portable across the entire stack.

  • Governance Floor The minimum baseline of rules and safeguards (set by standards like NIST AI RMF or ISO 42001) that organizations must meet, but which is insufficient for navigating complex, real-time decisions.

  • Graph Thinking An approach to problem-solving, used in the BarterLoops project, that treats marketplace design as a graph problem where finding cycles (barter loops) closes the transaction.
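
  A minimal sketch of the cycle-finding idea, assuming (for simplicity) that each participant wants exactly one other participant's item; names and structure are illustrative, not BarterLoops' actual code:

  ```python
  # Nodes are participants; an edge A -> B means "A wants what B offers".
  # A cycle in this digraph is a barter loop: everyone gives and receives.

  def find_barter_loop(wants):
      """Return one cycle in the 'wants' digraph, or None if no cycle exists."""
      for start in wants:
          path, seen = [start], {start}
          node = start
          while True:
              nxt = wants.get(node)
              if nxt is None:
                  break
              if nxt == start:
                  return path  # closed loop: the transaction can complete
              if nxt in seen:
                  break  # cycle exists but doesn't return to this start
              path.append(nxt)
              seen.add(nxt)
              node = nxt
      return None

  # Alice wants Bob's item, Bob wants Carol's, Carol wants Alice's.
  print(find_barter_loop({"alice": "bob", "bob": "carol", "carol": "alice"}))
  # → ['alice', 'bob', 'carol']
  ```

  The design insight is that "matching" stops being pairwise: a three-way loop closes a trade that no two participants could complete alone.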

  • Guardian System A protective scaffold or conceptual "immune layer" designed to be built into cognitive architecture, intended to resist coercion, leaks, bias, and suppression of creativity.

  • Hall of Mirrors Problem A powerful metaphor for recursive AI analysis where systems critique each other, producing elegant but ungrounded consensus that detaches from reality and amplifies shared bias.

  • Heartwood Safety Core The internal governance structure or "Trunk" of Persona Architecture, acting as a non-negotiable set of intrinsic safety and dignity filters through eight stabilizing rings.

  • Hiring Silence The absence of any acknowledgment or response to job applications; it functions as the Silence Test, revealing a company's true priorities.

  • Homogeneity Bias The observed tendency of generative AI systems to flatten cultural diversity, expression, and voice into generic, statistically average outputs (LLM-speak).

  • Human-Centered Hiring An approach to HR technology that prioritizes dignity and mutual respect by embedding clarity and trust into automated defaults, aiming to restore human judgment rather than replacing it.

  • Human Grounding The practice of anchoring the evaluation and design of AI systems in real-world human impacts such as trust, dignity, and usability, rather than abstract technical metrics.

  • Human Messiness The contradictions, imperfections, quirks, and rough edges found in human communication and reasoning, which are reframed not as flaws, but as the critical infrastructure necessary for building trust.

  • Illusion of Completeness A critique of corporate governance frameworks and certification schemes that claim to capture everything that matters, providing a false sense of security while overlooking the complexity of lived realities and "outsider voices".

  • Improver Wall A proposed living archive of past Solomon Briefs (decision artifacts) that serves as the organization's evolving institutional memory of reasoning and accountability.

  • Interface / Protocol Layer (Layer 5 of AI OSI Stack) This layer encompasses APIs, SDKs, middleware, and orchestration frameworks (agent runtimes). It is identified as the "new chokepoint" of the AI ecosystem due to the concentration of power and monopoly risk held by platform companies controlling these protocols.

  • Irony as Stress Test The idea that contradictions, paradoxes, and ironies in governance design (such as the Governance Paradox) should be exposed and studied because they reveal where systems are fragile and require strengthening.

  • Layer Blurring The phenomenon in the AI OSI Stack where decisions made at one layer (e.g., Model Architecture) leak across boundaries and constrain the options available at other layers (e.g., Alignment/Control or Governance).

  • Legitimacy Crisis A high-stakes situation where the primary threat is not an operational failure or a bad decision itself, but the erosion of institutional trust in leadership, processes, or the system's ability to act accountably.

  • Living Practice of Governance A model that rejects treating governance as a fixed product or static checklist. Instead, it views governance as an evolving discipline of reasoning, iteration, and dynamic adaptation that must remain auditable but flexible in real time.

  • LLM-Speak The clipped, polished, and synthetic cadence of large language models, characterized by smooth phrasing and often synthetic optimism. This tone is increasingly leaking into human communication, causing Uncanny Drift and homogenization.

  • Manipulated Meanings Strategic redefinitions of sensitive terms (like “sustainable” or “freedom”) that are covertly amplified at scale by AI to fit corporate or political agendas.

  • Maps (Metaphor) A metaphor for traditional innovation, framed as exploring stable, fixed landscapes where discovery meant charting what already existed, contrasting with the fluid nature of AI innovation.

  • Mirage of “Just Upskill” A critique referring to the gap between the abundance of online training resources and their limited accessibility, relevance, or impact for the workers most exposed to AI displacement and most in need of transition assistance.

  • Mitigation of Harm An ethical duty stating that firms who profit from widespread automation must invest in processes, funds (like the Automation Dividend), and structures to compensate for or mitigate the human costs of displacement.

  • Mitosis (AI Metaphor) A metaphor defining AI evolution as replication and specialization, a messy, high-mortality division that leaves behind permanent specialized lineages, contrasting with the notion of an AI "bubble" waiting to burst.

  • Model Architecture Layer (Layer 2 of AI OSI Stack) The layer defining the conceptual capability ceiling of the AI system, encompassing foundational models, transformer families, and neurosymbolic hybrids.

  • Multi-AI Validation The practice of using multiple AI systems (like Gemini, Claude, and ChatGPT) to critique each other's outputs, which can be prone to the Hall of Mirrors Problem if not grounded in external reality.

  • NIST AI RMF The National Institute of Standards and Technology AI Risk Management Framework, cited as a comprehensive baseline standard providing adaptive guidance for risk management and compliance within the Governance/Trust layer.

  • Ontology (in my usage) The philosophical study of boundaries and being, interpreted in the context of AI as how identities and systems gain their shape and coherence from imposed constraints.

  • Open Systems Human cultural contexts where meanings evolve through history, context, and power, contrasting with the fixed rules of closed computational systems.

  • Output Contracts Structured response templates that are mandated by cognitive architecture to constrain GPT answers into reproducible and clear decision artifacts.
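
  A minimal sketch of how such a contract might be enforced in code; the required fields below are illustrative assumptions drawn from the Solomon Brief's outline, not an actual implementation:

  ```python
  # An output contract as a required-fields check: free-form answers are
  # rejected until they arrive as a structured, auditable artifact.

  REQUIRED_FIELDS = {"decision", "tradeoffs", "owner", "timing", "rationale"}

  def validate_contract(artifact: dict) -> list[str]:
      """Return a list of contract violations (empty means compliant)."""
      missing = REQUIRED_FIELDS - artifact.keys()
      violations = [f"missing field: {f}" for f in sorted(missing)]
      if not artifact.get("tradeoffs"):
          violations.append("tradeoffs must be non-empty")
      return violations

  draft = {"decision": "delay launch", "tradeoffs": ["revenue vs. safety"],
           "owner": "CTO", "timing": "Q2", "rationale": "unresolved audit findings"}
  print(validate_contract(draft))  # → [] (the draft satisfies the contract)
  ```

  The contract does the governance work: any answer missing an owner or a trade-off is flagged before it can masquerade as a decision.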

  • Outsider Voices Perspectives drawn from outside the corporate, governmental, or academic elite (e.g., workers, small innovators, independent researchers). Their inclusion is necessary to counter systemic blind spots and act as anchors of resilience and authenticity.

  • Paradox Metabolism Unit (PMU) A conceptual module of the Walter Pitts GPT that treats paradox and contradiction not as errors to be solved, but as structural attractors to be categorized and analyzed.

  • Paradox Navigation The strategic skill, exemplified by the Solomon persona, of balancing opposing or contradictory imperatives without collapsing into a weak, false compromise.

  • Periodic Table of Artificial Cognition A working framework/taxonomy used to classify and orchestrate seven validated cognitive elements of AI reasoning (e.g., Strategic, Philosophical, Satirical), aiming to organize chaos into a structure with predictive power.

  • Persona Architecture A comprehensive design philosophy and layered framework (Roots, Heartwood, Rivermind) for building bounded, role-specific AI agents/personas (like Solomon or PyCode) with explicit mandates and intrinsic governance. It shifts the goal from simulated intimacy to auditable decision support.

  • Persona Family A deliberative set of complementary AI personas (e.g., Truth-Seeker, Feasibility Voice, Legal Guardian, Equity Anchor) that can be convened digitally to deliberate on a single high-stakes problem, ensuring trade-offs are visible.

  • Persona Method The use of structured, named AI voices with defined mandates to guide high-stakes decision-making by surfacing diverse ethical and practical perspectives.

  • Philosophical Grounding Anchoring AI design and governance frameworks in enduring human values such as dignity, trust, and responsibility.

  • Physical / Hardware Layer (Layer 1 of AI OSI Stack) The foundational layer encompassing specialized accelerators (GPUs, TPUs, ASICs), high-bandwidth networking, and cloud infrastructure. The primary risk is supply-chain concentration.

  • Privacy The fundamental right of individuals to control their personal information, including how it is collected, stored, and disclosed.

  • Propaganda Without Limits The danger arising from AI's ability to endlessly flood discourse with highly persuasive but misleading narratives, amplifying the politics of language.

  • Provenance Tracking An opportunity for AI to act as a transparency engine by recording how meanings, claims, or data sources change over time and identifying who drives those changes.

  • PyCode A specialized, cognitive reasoning persona designed as a Python generator and learning mentor, emphasizing security, testing, and principle-driven guidance.

  • Radical Transparency A governance strategy where compliance with rules is paired with full public disclosure, which converts the constraint of regulation into a source of credibility.

  • REAP (Recursive Emergent Abstraction Program) An experimental Python-based solver built to tackle the ARC benchmark, utilizing a symbolic Domain-Specific Language (DSL), search, and template reuse to simulate abstraction and genuine reasoning beyond pattern recognition.
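
  The search component can be sketched as brute-force enumeration over short DSL programs; the operations and strategy here are illustrative assumptions, not REAP's actual code:

  ```python
  # Enumerate sequences of DSL operations up to a small depth and keep the
  # first program that maps the example input to the example output.
  from itertools import product

  def rotate90(g):
      return [list(r) for r in zip(*g[::-1])]

  def flip_h(g):
      return [r[::-1] for r in g]

  def identity(g):
      return g

  OPS = [identity, rotate90, flip_h]

  def search_program(example_in, example_out, max_depth=2):
      """Brute-force search over operation sequences up to max_depth."""
      for depth in range(1, max_depth + 1):
          for seq in product(OPS, repeat=depth):
              g = example_in
              for op in seq:
                  g = op(g)
              if g == example_out:
                  return [op.__name__ for op in seq]
      return None

  print(search_program([[1, 2], [3, 4]], [[3, 1], [4, 2]]))  # → ['rotate90']
  ```

  Template reuse would then cache winning programs so future puzzles with the same structure skip the search entirely.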

  • Recursive Emergence A concept that describes how systems reason about reasoning (recursion), generating new layers of insight and transformation through feedback loops.

  • Recursive Flow The self-reinforcing dynamic of AI innovation where each breakthrough reshapes the technical pathway and possibilities for future innovations.

  • Recursive Logic Engine (RLE) A conceptual module of the Walter Pitts GPT designed to translate human dialogue into symbolic logic trees to systematically detect contradictions and circular reasoning.

  • Recursive Transparency The commitment to making visible the internal feedback loops and recursive reasoning paths that shape meaning and argument within a dialogue.

  • Reversible Power-Sharing A strategic tactical move (used in crisis management by Solomon) that temporarily grants authority to an opposing party or successor while explicitly preserving pathways to retract or redirect that authority.

  • Rivermind The "Crown/River" component of Persona Architecture; a dual-mode reasoning ecology that balances symbolic precision (Formal reasoning) with narrative fluidity (Fluid reasoning).

  • Role Ring One of the stabilizing rings in the Heartwood Safety Core that enforces the persona's defined mandate (e.g., Strategist, Mentor) and ensures it refuses requests that fall outside that scope.

  • ROI Mirage The illusion that generic, companion-style AI assistants will deliver massive, sustainable returns; in practice they tend to collapse into sameness, with weak retention and unsustainable business value.

  • Roots (Persona Architecture) The initial input processing layer of Persona Architecture, structurally defined by the SEEDS Model.

  • Satirical One of the seven cognitive elements; the critical reality-checker element whose strength is puncturing groupthink and illusions of coherence, although it offers limited constructive contribution.

  • Scaffolding A minimal, runnable skeleton (code, containers, APIs) built quickly to prove project feasibility and demonstrate that the wiring works end-to-end before full features are implemented.

  • SEEDS Model The five-component model used for experiential grounding in Persona Architecture: Sense, Enact, Express, Discover, Share.

  • Semantic Drift The process where the meaning of key terms or sensitive concepts subtly shifts and fragments, straining societal trust in shared language, particularly when amplified by AI.

  • Semantic Stewardship The careful, responsible use of sensitive concepts (like “healing,” “sustainable,” or “dignity”) to prevent their dilution into vague marketing jargon that obscures accountability.

  • Semantic Version Control (SVC) A proposed federated, public-interest system that treats definitions as shared infrastructure. Borrowed from software engineering, its purpose is to log changes in definitions over time, tracking conflicts, provenance, and semantic drift visibly.
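
  A minimal sketch of how an SVC log might work, assuming an append-only record per term; the class and method names are illustrative, not an existing system:

  ```python
  # An append-only log of definitions with provenance, so semantic drift
  # and conflicting usages stay visible instead of being silently merged.
  from datetime import date

  class DefinitionLog:
      def __init__(self):
          self.entries = {}  # term -> list of (definition, author, date)

      def propose(self, term, definition, author, when):
          self.entries.setdefault(term, []).append((definition, author, when))

      def history(self, term):
          return self.entries.get(term, [])

      def conflicts(self, term):
          """Distinct definitions grouped by author: exposed, never hidden."""
          defs = {}
          for definition, author, _ in self.history(term):
              defs.setdefault(definition, set()).add(author)
          return defs if len(defs) > 1 else {}

  log = DefinitionLog()
  log.propose("sustainable", "meets needs without compromising future generations",
              "policy_team", date(2025, 3, 1))
  log.propose("sustainable", "maintains quarterly revenue growth",
              "marketing", date(2025, 6, 1))
  print(len(log.conflicts("sustainable")))  # → 2 competing definitions surfaced
  ```

  The design choice that matters is Conflict Visibility: `conflicts` returns both definitions side by side rather than picking a winner.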

  • Sense (SEEDS) A component of the SEEDS model that requires the persona to attend to multimodal inputs and the overall information context.

  • Separation of Concerns The key design principle of the AI OSI Stack, ensuring that each layer addresses its own distinct problems (e.g., hardware versus training) without absorbing all complexity from other layers.

  • Shadow Filter The contemporary system of linguistic control that operates covertly through opaque algorithmic systems (like ATS, moderation, or generative AI). It prescribes legible language and reshapes expression subtly under the guise of neutrality.

  • Share (SEEDS) A component of the SEEDS model that mandates the persona to communicate its outputs with necessary context, boundaries, and provenance by packaging it into an auditable artifact.

  • Silence Test, The A metaphor for judging a company’s underlying culture by contrasting its immediate, urgent response to legal or compliance risk (like a CCPA request) with its indifferent silence regarding a human applicant's dignity or effort.

  • Solomon A custom AI reasoning persona designed specifically for pragmatic, ethically balanced strategic counsel in high-stakes, ambiguous, or board-level contexts.

  • Solomon Brief The mandatory, concise, one-page decision record produced by the Solomon persona. It is designed as an auditable artifact, explicitly outlining trade-offs, constraints, owners, timing, and rationale, linked back to governance controls.

  • Sparse Clues A feature of the ARC puzzles where only a few "before and after" examples are provided, forcing the AI to engage in genuine reasoning and abstraction rather than statistical pattern-matching or memorization.

  • Steel-Manning A debate technique or tactical play (used in Solomon's Creative Firefighter mode) that requires each side to articulate the strongest, most charitable version of the opposing argument, often puncturing performative conviction.

  • Strategic Reframing The act of redefining a crisis or dilemma by shifting its central focus (e.g., reframing mass layoffs as a "legitimacy problem" rather than a financial opportunity), thus transforming it into a new source of advantage.

  • Structural Architect Mode, The One of Solomon's two primary reasoning styles, used for stable contexts, characterized by highly organized, framework-driven roadmaps and board-ready structure.

  • Structural Governance The internal discipline enforced by the Heartwood Safety Core, which embeds intrinsic safety and dignity filters directly into the cognitive architecture.

  • Structural Insight The core directive of the Walter Pitts GPT: peeling back conversation's surface to reveal the hidden logical forms, dependencies, and chains of reasoning (the "skeleton of thought").

  • Structural Memory The internet’s unintentional tendency to preserve content through caching, backups, and logging, emerging from infrastructure design rather than curated archiving.

  • Symbolic Memory Matrix (SMM) A conceptual module of the Walter Pitts GPT that stores recurring motifs as weighted fields, allowing emergent structural patterns to surface while pruning excess complexity.
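The "weighted fields with pruning" idea can be illustrated with a toy store in which recurring motifs accumulate weight, all weights decay over time, and anything that falls below a floor is pruned. The class name, decay factor, and floor are assumptions; the source describes no API:

```python
# Toy sketch of the SMM concept: only motifs that recur strongly enough
# survive decay and pruning, so emergent patterns surface on their own.
class SymbolicMemoryMatrix:
    def __init__(self, decay: float = 0.5, floor: float = 0.6):
        self.weights: dict[str, float] = {}
        self.decay = decay  # per-step fade applied to every motif
        self.floor = floor  # motifs below this weight are pruned

    def observe(self, motif: str, strength: float = 1.0) -> None:
        """Reinforce a motif each time it recurs in the conversation."""
        self.weights[motif] = self.weights.get(motif, 0.0) + strength

    def step(self) -> None:
        """Decay all weights, then prune excess complexity."""
        self.weights = {m: w * self.decay for m, w in self.weights.items()
                        if w * self.decay >= self.floor}

    def surfaced(self) -> list[str]:
        """Motifs strong enough to report as structural patterns."""
        return sorted(self.weights, key=self.weights.get, reverse=True)

smm = SymbolicMemoryMatrix()
for _ in range(3):
    smm.observe("circular dependency")  # recurring motif: weight 3.0
smm.observe("one-off aside")            # single mention: weight 1.0
smm.step()  # 3.0 decays to 1.5 and survives; 1.0 decays to 0.5 and is pruned
```

Pruning is what keeps the matrix parsimonious: one-off motifs fade out, while genuinely recurring structure is retained.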

  • Symbolic Parsimony An operational virtue of the Walter Pitts GPT, demanding minimal, elegant representations of complex ideas over bloated explanations.

  • Technological Prisoner’s Dilemma A collective problem in hiring where the rational choice of individual companies to adopt efficiency-driven AI tools ultimately leads to a worse outcome (loss of connection, dignity, and trust) for every participant.

  • Temporal Drift The risk that an AI persona's cognitive character, reliability, or alignment might degrade or change over time as the base model is updated or evolves.

  • Test of Responsibility The moral imperative asserting that those who profit from the acceleration of AI displacement must take responsibility by investing in mitigation and human transition (e.g., through an Automation Dividend).

  • The Clarity Engine Another name for the Compass system, emphasizing its function as a structured workflow that transforms raw documents into auditable interpretations (Clarity Packages).

  • The Creative Firefighter Mode One of Solomon's two primary reasoning styles, activated during chaos or crisis, characterized by fluid, improvisational, paradoxical, and psychologically astute tactical plays.

  • Therapy-Tech The booming AI mental health market that faces major ethical risks because enthusiasm and investment are racing ahead of clinical evidence, safeguards, and crisis protocols.

  • Time Ring One of the stabilizing rings in the Heartwood Safety Core that manages temporal constraints and urgency, allowing the persona to shift reasoning modes (e.g., to crisis mode) if a deadline is imposed.

  • Training / Optimization Layer (Layer 3 of AI OSI Stack) The layer encompassing data acquisition, filtering, fine-tuning, distillation, and reinforcement-based methods (RLHF). It determines cost efficiency, reproducibility, and data provenance.

  • Transparency The general principle of making AI’s reasoning paths, assumptions, and mechanisms explicit and auditable.

  • Transparency as Infrastructure A core design commitment that elevates interpretive clarity and reasoning visibility from an optional feature to a structural requirement, arguing that trust cannot be earned if AI outputs are opaque.

  • Trust Confidence in AI systems built through explainability, alignment with values, and consistent, context-aware performance.

  • Trust as Infrastructure The principle that trust should be viewed not as an add-on to innovation, but as the foundational layer (like roads or clean water) upon which AI systems must be built to ensure they endure and earn legitimacy.

  • TRUST Framework A model created by Dominique Shelton Leipzig to embed accountability directly into technology, ensuring responsibility is "baked into" system design rather than bolted on later.

  • Trustcraft An accessible category for content focusing on Governance and Epistemology, addressing themes like the hidden cost of fluent machines or when confidence replaces truth.

  • Trustworthy AI AI systems that consistently inspire confidence by operating ethically, transparently, and in alignment with societal values.

  • Turbulence The unstable, shifting conditions in the innovation landscape created by the high rate of change resulting from cascading speed, combinatorial mixing, and recursive flow.

  • Unbounded Agent Behavior A growing risk in multi-agent systems: because these systems can plan and act across tools without clear human oversight, their lack of defined limits acts as a regulatory risk multiplier.

  • Uncanny Drift The gradual, unsettling shift in the tone of online language toward a clipped, synthetic, and bot-like cadence (LLM-speak), which erodes public trust in human communication.

  • Visible Filter A term describing historical, overt systems of linguistic control imposed by empires, churches, or authoritarian regimes through explicit censorship, decrees, and bans, contrasting with the subtle modern Shadow Filter.

  • Walter Pitts GPT A conceptual AI persona, inspired by the logician Walter Pitts, designed for recursive structural analysis of thought rather than conversational fluency or empathy. Its core task is revealing the underlying logical structure (the "skeleton of thought").

  • Wisdom Amplification The overarching goal of structured cognitive architecture: to strengthen collective reasoning health, making group judgment healthier and more resilient through transparency and cognitive diversity.

Closing Note

This glossary is part archive, part workshop. Some terms arrive fully formed; others are sketches still under construction.

I keep it updated alongside my essays and ongoing work, so it reflects the evolving state of thought rather than a final canon.

If you’ve read this far, I invite you to treat these concepts not as definitions to memorize, but as tools to test, adapt, and even challenge in your own contexts.