A Pivotal Conversation: Learning from Dominique Shelton Leipzig on AI Governance

A Moment of Generosity and Insight

Every once in a while, a conversation lands in your life like a compass. Not just a pleasant exchange, not just advice, but a turning point that reorients your trajectory.

I recently experienced one of those moments through a conversation with Dominique Shelton Leipzig, one of the world’s foremost experts in privacy, data ethics, and AI governance. The opportunity was made possible thanks to Langley Allbritton, an AI strategist whose own work I deeply admire. Her thoughtful introduction connected me with Dominique, and the generosity of both reminded me that progress often depends on the bridges others help us cross.

Dominique is more than an authority; she is an architect of trust. As the CEO and Founder of Global Data Innovation, co-founder of the Digital Trust Summit, and creator of the TRUST Framework, she has shaped how governments and global companies alike understand accountability in technology. To spend even a short time in conversation with her was to glimpse the foundations of what “trustworthy AI” will mean in the decade ahead.

And for me personally, it was more than professional. It was a moment of clarity.

Why This Conversation Mattered

My path into AI has been far from conventional. I began in art, worked in IT and cloud infrastructure, and slowly found myself drawn toward the intersection of ethics, governance, and emerging technologies. Along the way, I have often wrestled with questions: Where do I fit? Can a background outside law, academia, or corporate strategy be an asset in this space, or is it a liability?

Speaking with Dominique shifted that frame. It was as if someone had handed me a map of terrain I had only ever glimpsed from the edges. The fog lifted. Suddenly, the winding path of my past didn’t look like detours; it looked like preparation.

Insider Guidance That Changed My Perspective

Dominique’s generosity wasn’t abstract. She offered concrete, actionable steps that reframed both what I could do and how I could contribute:

  • Step into governance roles. State agencies like the California Privacy Protection Agency, federal institutions such as the DOJ, and forward-looking private firms are building capacity for AI governance. Technical experience, paired with an ethical compass, is increasingly in demand.

  • Join the professional community. Organizations like the International Association of Privacy Professionals (IAPP) are not just legal forums; they are places where technologists can bring ground-level insight into how governance actually functions in practice.

  • Write and share. Even short reflections can help bridge the gap between IT realities and governance discourse. The field benefits from perspectives rooted in implementation, not just policy.

  • Stay close to frameworks. Documents like the Blueprint for an AI Bill of Rights and evolving state and federal regulations aren’t just abstract principles; they are the scaffolding of where policy is heading.

  • Invest in relationships. Governance is not just compliance; it is also community. Networks of trust and collaboration determine whose voices are heard when standards are set.

Each point landed less like a checklist and more like a roadmap: practical directions for a space that too often feels inaccessible.

A Personal Turning Point

I left the conversation not just inspired, but energized in a way I hadn’t felt for years. The path ahead no longer seemed fragmented. My background in IT, infrastructure, and even art was not something to downplay; it was the very diversity of perspective the governance field needs.

Dominique helped me see what I couldn’t on my own: that governance is not only about rules and compliance, but about translation. Translating technical systems into ethical safeguards. Translating human values into operational standards. Translating across disciplines, communities, and lived experiences.

That realization, paired with her generosity, turned doubt into direction.

Looking Forward

AI governance is quickly moving from an optional add-on to the infrastructure of trust itself. Companies will not survive long-term on innovation alone; they will survive by earning legitimacy.

For me, this conversation was more than professional guidance. It was a reminder that while the risks of AI are real, so too is the opportunity to help shape it into something that strengthens dignity, accountability, and human flourishing.

I remain profoundly grateful to Dominique for her candor and encouragement, and to Langley for opening the door to such an important conversation. What I carry forward is not just advice, but a clearer conviction: unconventional voices are needed in this work, not at the margins but at the center.

Key Concepts and Working Terms

  • AI Governance: The system of policies, strategies, and frameworks guiding the ethical and responsible use of AI. It bridges technical expertise with legal and ethical accountability.

  • TRUST Framework: A model created by Dominique Shelton Leipzig to embed accountability directly into technology, ensuring responsibility is not bolted on after the fact but baked into system design.

  • Trustworthy AI: AI systems that earn confidence by operating ethically, transparently, and in alignment with societal values.

  • Data Ethics: The field examining the moral implications of data collection, use, and sharing, with a focus on rights, fairness, and social impact.

  • Privacy: The right of individuals to control their personal information, including how it is collected, stored, and disclosed.

  • Blueprint for an AI Bill of Rights: A U.S. policy document outlining protections and principles for AI systems, including transparency, accountability, and safeguards for individual rights.
