Sharing My Voice with the IAPP: Why I Pitched Articles on AI Governance

Sharing Ideas and Shaping Conversations

Today, I took a step that feels both exciting and a little nerve-wracking: I submitted three article pitches to the International Association of Privacy Professionals (IAPP). The IAPP is a global, policy-neutral organization that provides resources, community, and leadership for professionals working in privacy, data protection, AI governance, and digital responsibility. Publishing with them would be an opportunity to share ideas and shape conversations among the very people building the frameworks that will guide how AI fits into our lives.

Why the IAPP?

The IAPP sits at the intersection of law, technology, and governance, exactly where my independent AI research has been heading. Their audience includes privacy officers, policymakers, compliance leaders, and technologists: people who need to translate abstract governance into practical systems. I’ve long believed that privacy professionals will be on the front lines of making AI trustworthy, and contributing to their community felt like the right next step.

For me, the move to pitch the IAPP represents a shift from experimenting in my lab notebook to engaging with an institutional audience. My notebook has always been about testing, critiquing, and reflecting in public. But the IAPP provides a different platform, one that is less about provisional sketches and more about framing insights for professionals who are accountable for decisions in real organizations. That shift is daunting, but it’s also exactly where the conversation needs to go.

What I Pitched

I crafted three pitches based on themes I’ve been developing in my AI Lab Notebook:

  1. Epistemology by Design: How AI Encodes What Counts as Truth
    Every AI system carries assumptions about what knowledge is, whose voice matters, and how legitimacy is defined. I call this epistemology by design. My pitch argues that privacy and governance professionals need to see design choices as governance levers — not just technical details.

  2. Governance as a Living Practice: Beyond Compliance Toward Real-Time Trust
    Compliance frameworks like the NIST AI RMF or ISO 42001 set important baselines, but they often collapse under pressure. This pitch proposes governance as a living practice — adaptive, auditable, and dynamic. Drawing from my experiments with AI personas in high-stakes decision scenarios, I show how organizations can simulate diverse perspectives to test governance in real time.

  3. The Human Cost of AI in Hiring and Work
    AI is reshaping the workforce, from job loss to automated hiring pipelines. But behind the efficiency gains lie questions of dignity, fairness, and accountability. This pitch connects my critiques of AI in hiring and employment to governance, showing how privacy professionals can protect not just data, but people.

Why These Topics Matter

Each pitch is rooted in the same conviction: AI governance is not abstract. It touches memory, dignity, work, and truth itself. Whether through the epistemology baked into a model, the adaptability of governance under pressure, or the fairness of hiring systems, privacy professionals have a role to play in shaping how AI affects human lives.

These are not “future problems” waiting for some distant AI. They are live questions now, already shaping hiring practices, decision-making structures, and the integrity of language itself. By engaging with IAPP’s readership, I hope to make these questions visible in ways that connect governance frameworks with lived experience.

What I Hope For

I don’t know yet which, if any, of my pitches will be accepted. But I’m hopeful. To be published by the IAPP would be an opportunity to bring my independent experiments, from custom GPTs to persona architecture, into dialogue with a global community of professionals. It’s a chance to bridge technical design, ethical reflection, and governance practice in a way that could make AI more trustworthy.

More than anything, I want to share my voice in a space where questions of privacy, accountability, and human dignity are being actively debated. If even one idea resonates and helps sharpen how we think about AI governance, it will have been worth it.

Reflection: From Lab Notebook to Professional Forum

Part of what excites me about this step is the translation process itself. In my notebook, I often write in fragments: experiments, critiques, frameworks-in-progress. For IAPP, I need to frame these ideas in ways that connect directly to professional practice. That doesn’t mean sanding off the edges or making the work less exploratory. It means showing how exploratory work is itself a form of governance thinking — a way of making uncertainty visible and preparing people for ambiguity.

In that sense, this is less about “getting published” and more about extending the lab notebook into a professional dialogue. Governance isn’t built in isolation. It is shaped through contested voices, tested frameworks, and reflections that invite critique. My hope is that IAPP readers will see value in the provisional, not just the polished, and in practices that prioritize trust as an evolving commitment.

I’ll share updates as I hear back from the IAPP. In the meantime, I remain committed to testing, reflecting, and building — because AI governance isn’t just about rules. It’s about trust, and trust is a living practice.

Key Concepts and Working Terms

  • Epistemology by Design: The idea that AI systems embed assumptions about knowledge, legitimacy, and authority, making epistemology itself a matter of design and governance.

  • Living Practice Governance: A model of governance that treats rules and frameworks as adaptive, real-time practices that must remain auditable but flexible.

  • AI Trust Architect: My working title for the role I inhabit — someone who designs and tests systems that expand human agency, respect dignity, and remain accountable.

  • Persona Architecture: A framework for building role-specific AI personas (legal, equity, feasibility, etc.) to test and strengthen reasoning under pressure.

  • Human-Centered AI Governance: Governance that accounts not only for data flows but also for dignity, fairness, and trust in human contexts such as hiring and work.

  • Transparency as Infrastructure: The principle that AI must reveal how it knows, not just what it outputs, if it is to build lasting trust.
