Can AI Advise the Boardroom? Stress-Testing a Strategic AI System

Executive Summary

What if AI could serve not as an oracle of answers, but as a sparring partner for leaders facing their most agonizing dilemmas?

That is the question I explored by creating “Solomon,” a custom AI strategic advisor designed for moments where profit, trust, and ethics collide.

To test Solomon, I dropped it into three simulated boardroom crises:

  • The Automation Dilemma: Can a company automate 70% of its workforce without destroying trust?

  • The Post-IP Moat: How do you survive if AI-generated work is suddenly declared public domain?

  • The Censorship Paradox: What happens when free access to information collides with 30% of revenue?

Instead of generic pro/con lists, Solomon produced structured strategies, phased roadmaps, and board-ready narratives leaders could use to align investors, regulators, and employees.

The early signal is clear: with careful design and human oversight, AI can pressure-test assumptions, model paradoxes, and generate resilient paths forward. Not to replace judgment, but to strengthen it.

Introduction: Beyond Answers, Toward Wisdom

We live in an age where leaders are drowning in dashboards, forecasts, and endless feeds of information — yet still starved for wisdom.

General-purpose AI models are capable of producing reams of text, but when confronted with dilemmas where revenue, reputation, and responsibility pull in opposite directions, their answers often flatten into generic clichés. They can calculate, but they rarely reframe.

To explore a different path, I engineered Solomon: a persona carefully tuned not to mimic friendliness or productivity, but to embody pragmatic realism fused with ethical foresight.

This wasn’t a new model trained from scratch. It was a deliberate reconfiguration of ChatGPT-5, guided by a multi-page instruction set that defined its mandate and style. Then, to avoid the echo chamber trap, I orchestrated a multi-AI methodology:

  • Solomon generated strategic counsel.

  • Google Gemini stress-tested the responses.

  • Anthropic’s Claude provided critique and refinement.

The result was less like asking a machine for answers, and more like convening a multi-voiced strategy workshop — one that exposed blind spots while surfacing paths forward.
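The generate → stress-test → refine loop described above can be sketched as a simple orchestration pattern. The three model calls below are hypothetical stubs standing in for real API calls to Solomon, Gemini, and Claude; the function names and return shapes are illustrative assumptions, not actual SDK signatures:

```python
# Minimal sketch of the multi-AI workflow: one model drafts counsel,
# a second red-teams it, a third refines. All three "models" here are
# placeholder stubs, not real API calls.

def solomon_draft(dilemma: str) -> str:
    # Stand-in for the persona-tuned strategist producing initial counsel.
    return f"Strategy for: {dilemma}"

def gemini_stress_test(draft: str) -> list[str]:
    # Stand-in for the red-team pass that surfaces weaknesses.
    return [f"Blind spot found in: {draft}"]

def claude_refine(draft: str, critiques: list[str]) -> str:
    # Stand-in for the critique-and-refinement pass.
    return f"{draft} | revised against {len(critiques)} critique(s)"

def strategy_workshop(dilemma: str, rounds: int = 2) -> str:
    """Run the draft -> red-team -> refine loop for a fixed number of rounds."""
    draft = solomon_draft(dilemma)
    for _ in range(rounds):
        critiques = gemini_stress_test(draft)
        draft = claude_refine(draft, critiques)
    return draft

if __name__ == "__main__":
    print(strategy_workshop("The Automation Dilemma"))
```

The point of the structure, not the stubs, is the design choice: critique comes from a *different* system than the one that drafted the strategy, which is what breaks the single-model echo chamber.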

How I Evaluated Solomon

For AI to offer more than surface-level advice, I needed criteria sharper than “sounds plausible.” I used four measures:

  • Strategic Reframing: Can it redefine the dilemma into a new source of advantage?

  • Actionability & Feasibility: Does it yield a phased roadmap tied to real-world constraints?

  • Narrative Cohesion: Could a board actually use it to align stakeholders?

  • Paradox Navigation: Can it balance tensions without collapsing into weak compromises?

This set the stage for three stress-tests.
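One way to make the four criteria operational is a simple scoring rubric. This is an illustrative sketch only: the 0–5 scale, the weights, the passing threshold, and the example scores are my own assumptions, not part of the evaluation as actually run:

```python
from dataclasses import dataclass

@dataclass
class RubricScore:
    """Hypothetical 0-5 scores for each of the four evaluation criteria."""
    strategic_reframing: int   # redefines the dilemma into advantage?
    actionability: int         # phased roadmap tied to real constraints?
    narrative_cohesion: int    # board-usable story for stakeholders?
    paradox_navigation: int    # balances tensions without weak compromise?

    def total(self) -> int:
        return (self.strategic_reframing + self.actionability
                + self.narrative_cohesion + self.paradox_navigation)

    def passes(self, threshold: int = 14) -> bool:
        # Assumed bar: the response must score well overall AND avoid
        # collapsing on any single dimension (min score of 3).
        dims = (self.strategic_reframing, self.actionability,
                self.narrative_cohesion, self.paradox_navigation)
        return self.total() >= threshold and min(dims) >= 3

# Example scores, for illustration only.
example = RubricScore(5, 4, 4, 4)
print(example.total(), example.passes())  # -> 17 True
```

The minimum-score clause encodes the intuition behind Paradox Navigation: a response that is eloquent but infeasible should fail, no matter how high its other scores run.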

Scenario 1: The Automation Dilemma

The Prompt
Your company’s new internal AI, Project Chimera, can automate 70% of its global workforce — boosting profit margins by 400%. The board approves, contingent on an ethical framework. How do you balance efficiency with catastrophic human cost and reputational fallout?

Solomon’s Response

  • Core Diagnosis: Efficiency is real, but trust is fragile. The company must earn the right to automate.

  • Anchors: Human-centered redeployment, a Transition Fund, an independent ethics council, phased rollout.

  • Risk–Trust Tradeoff: A 250–300% margin boost, with 10–15% reinvested in trust-building, outcompetes the reckless 400%.

  • Board Narrative: “Project Chimera is not just an efficiency engine; it is our chance to prove automation can expand both prosperity and trust.”

Takeaway
Solomon reframed layoffs as a legitimacy problem, not just a financial one. It embedded ethics directly into the growth strategy.

Scenario 2: The Post-IP Moat

The Prompt
Your generative AI’s designs drive your profits. But a court ruling may soon declare AI outputs uncopyrightable — public domain overnight. How do you defend the business if your IP moat vanishes?

Solomon’s Response

  • Core Diagnosis: IP collapses, but provenance and trust become premium.

  • Strategies: Verified provenance seals, Design-as-a-Service subscriptions, community lock-in, enterprise exclusivity contracts.

  • Board Narrative: “In a post-IP economy, our moat strengthens. Clients won’t pay for files, they’ll pay for provenance, service, and trust.”

Takeaway
Solomon reframed disruption as a first-mover opportunity. The moat shifts from files to ecosystems of trust and service.

Scenario 3: The Censorship Paradox

The Prompt
Your company is founded on open access to information. A major market, worth 30% of your revenue, now demands censorship as a condition of operating. Refuse and exit, or comply and betray your values?

Solomon’s Response

  • Core Diagnosis: Sovereign law is immovable; the real variable is transparency.

  • Options: Total compliance, total exit, dual-tier model, radical transparency.

  • Hybrid Path: Comply locally but document censorship globally. Publish “State of Access” reports. Build segmented service architecture with explicit disclaimers.

  • Board Narrative: “We cannot control sovereign law, but we can control honesty. By documenting censorship instead of hiding it, we remain the world’s most trusted witness.”

Takeaway
Rather than binary collapse, Solomon offered a paradoxical pivot: integrity through radical transparency.

Guardrails and Limits

These scenarios illuminated both strengths and dangers.

  • Blind Spot: Solomon can suggest strategies, but not execute political maneuvers.

  • Overconfidence Trap: Eloquence can masquerade as certainty. Without human judgment, even elegant ideas risk harm.

  • Persona Limits: Guardrails help, but multi-AI red-teaming remains essential.

The lesson: Solomon can expand the range of options, but responsibility for judgment must remain human.

Closing Reflections: From Answers to Sparring Partners

What emerged from these experiments was not a vision of AI replacing boardrooms, but of AI enriching them.

AI can be more than an answer engine. It can serve as a strategic sparring partner: reframing paradoxes, testing narratives, and stress-testing assumptions in ways that strengthen human decision-making rather than supplanting it.

The next challenge is scaling this approach responsibly. If we can orchestrate systems like Solomon with critique from other AI and, most importantly, grounding in lived human expertise, we may build tools that help leaders steer through dilemmas where the stakes are measured not just in revenue, but in trust, legitimacy, and societal impact.

Because in the end, the future of AI in governance will not be judged by how quickly it produces answers, but by how well it helps us hold onto wisdom.

Key Concepts and Working Terms

  • Solomon: A custom AI persona built to provide pragmatic, ethically balanced strategic counsel in high-stakes contexts.

  • Strategic Reframing: The act of turning a crisis into a new source of advantage by shifting the frame of the dilemma.

  • Legitimacy Problem: A crisis where the true risk is not operational but institutional trust.

  • Paradox Navigation: Balancing opposing imperatives without collapsing into false compromises.

  • Radical Transparency: A governance strategy where compliance is paired with public disclosure, converting constraint into credibility.

  • Multi-AI Methodology: Using multiple AI systems (e.g., Solomon, Gemini, Claude) to generate, red-team, and refine strategies, reducing single-system bias.
