AI Isn’t a Bubble. It’s Mitosis (With a High Mortality Rate)

There’s a Better Metaphor

We call tech hype “bubbles” because they burst. But AI isn’t deflating. It’s branching.

The better metaphor is mitosis: general-purpose models (the “parent cell”) splitting into specialized lineages (the “daughter cells”) — agents, vertical models, edge deployments, and now even infrastructure megaprojects. Each inherits the transformer DNA, but survival isn’t guaranteed. Unlike biology, AI mitosis is costly and selective: every split demands compute, talent, compliance reviews, and a paying customer base.

That’s why the sharper framing is mitosis, tempered by speciation. Branching is real. But survival is uneven, selective, and shaped by economics, regulation, and trust.

Why “Bubble” Misleads and “Mitosis” Fits (With Caveats)

A bubble assumes one system inflates then collapses. Mitosis assumes replication and specialization, though in AI, attrition is the norm:

  • Vertical models tuned to domains (finance, health, code) rather than one universal brain.

  • Agentic systems that plan, call tools, and coordinate: less chatbot, more orchestrator.

  • Small/edge models that run locally: faster, cheaper, private, but harder to govern.

  • Infrastructure lineages scaling GPU factories to meet demand.

This branching is visible across research, product roadmaps, regulation, and now global compute buildouts. But every “daughter cell” has to fight for viability.

Evidence of the Split and the Survival Questions

1. Vertical “Daughter Cells”

  • Finance: BloombergGPT, a 50-billion-parameter LLM, was trained on ~363 billion financial tokens and ~345 billion general tokens, outperforming comparably sized open models on financial NLP tasks (Wu et al.). Yet adoption remains slow, constrained by compliance and risk management.

  • Medicine: Med-PaLM 2, part of Google’s MedLM family, scored 86.5 percent on MedQA, nearly a 19-point gain over its predecessor, reaching “expert level” in physician evaluations (Singhal et al.). But deployment is limited to pilots and research previews, since FDA clearance cycles for AI/ML medical devices are multi-year and resource-intensive (“Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan”).

  • Code: Code Llama demonstrates developer uptake but faces commoditization risk: many organizations prefer to fine-tune general models rather than rely on narrow verticals.

2. Agents in Practice, Not Just Chat

  • Research such as ReAct, Reflexion, and Toolformer has shown that LLMs can interleave reasoning with action, deciding when to invoke external tools. Frameworks like AutoGen and LangGraph make multi-agent systems possible, and OpenAI’s introduction of function calling (2023) and its Responses API (2025) formalized agent workflows (OpenAI).

  • Yet orchestration isn’t free. Multi-agent setups multiply monitoring, security, and cost overhead. Unbounded agent behavior is a regulatory risk multiplier. Enterprises already face vulnerabilities like prompt injection and hallucinated tool calls; scaling these into multi-agent systems compounds risk.
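The loop these frameworks formalize is simple to sketch. Below is a toy Python version under stated assumptions: `plan`, `TOOLS`, and `run_agent` are illustrative names, not any vendor's API, and the "model" is a stub. What matters is the shape: the model proposes an action, the runtime executes the matching tool, every call is logged, and a hard iteration cap acts as the stop condition the enterprise concerns above demand.

```python
# Minimal sketch of an agent loop with tool calling, an auditable trace,
# and a hard stop condition. All names are illustrative, not a real framework.

TOOLS = {
    "search": lambda q: f"results for {q!r}",                    # stand-in for a search tool
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy calculator
}

def plan(goal, history):
    """Stand-in for the LLM: decide the next tool call, or finish."""
    if not history:
        return ("calc", "2 + 2")           # first step: invoke a tool
    return ("finish", history[-1][1])      # then stop with the last observation

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):             # stop condition: bounded steps
        action, arg = plan(goal, history)
        if action == "finish":
            return arg, history            # auditable trace of every tool call
        observation = TOOLS[action](arg)
        history.append((f"{action}({arg})", observation))
    raise RuntimeError("agent exceeded step budget")  # fail closed, not open

answer, trace = run_agent("what is 2 + 2?")
```

The step budget and the returned trace are the governance hooks: without them, the "unbounded agent behavior" risk described above has no ceiling and no audit trail.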

3. Edge and Small Models

  • Models like Phi-3 (Microsoft) and Gemma 3n (Google) show that powerful LLMs can shrink to run on phones or single GPUs, delivering better latency, privacy, and cost efficiency (Microsoft; Google).

  • But deployment is messy. Device fragmentation, weak patching, and security vulnerabilities create governance headaches. Worse, shadow AI emerges: employees running unmonitored small models on laptops, bypassing IT oversight.

4. Infrastructure Branching: Million-GPU Factories

Branching isn’t limited to models; it’s happening at the infrastructure layer. In September 2025, NVIDIA and OpenAI announced what they called “the biggest AI infrastructure deployment in history”:

  • At least 10 gigawatts of NVIDIA systems deployed in new data centers.

  • Millions of GPUs dedicated to training and inference.

  • NVIDIA committing up to $100 billion in staged investment as capacity comes online (Salian).

The vision: “AI factories” designed to fuel OpenAI’s next generation of frontier models and inference workloads at global scale.

But here’s where mitosis meets selection pressure:

  • Energy bottlenecks: Ten gigawatts is roughly the output of ten large nuclear reactors. Scaling AI compute now collides with grid constraints, water rights, and ESG scrutiny.

  • Economic fragility: Even with 700M weekly ChatGPT users, monetization lags. Free-tier inference at million-GPU scale risks becoming a money sink unless subsidized by enterprise contracts.

  • Timeline risk: The first gigawatt won’t be online until 2026, long enough for competitors, regulators, and alternative compute paradigms (edge, decentralized) to shift the field.

  • Governance & geopolitics: Multi-gigawatt AI factories are no longer “just data centers.” They’re national infrastructure, inviting scrutiny from regulators, antitrust authorities, and geopolitical rivals.

This is a new “daughter cell” in the infrastructure lineage. But like the others, its survival is conditional: dependent on economics, governance, and public legitimacy.

Governance: Differentiation Under Strain

  • EU AI Act (2024): The Act classifies AI systems by risk category, prohibiting some, tightly regulating others, and creating a special class for general-purpose AI, an explicit recognition of “lineages” (European Parliament and Council). Enforcement, however, varies by member state.

  • FDA: The agency maintains an AI-Enabled Medical Devices list, now numbering hundreds of devices, most of them narrow and assistive. In 2025, it also flagged the need to identify devices that build on foundation models, issuing draft guidance for lifecycle management and premarket submissions (U.S. Food and Drug Administration, “Artificial Intelligence (AI)-Enabled Medical Devices”; “Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan”).

  • FTC: The Commission has pursued enforcement actions against deceptive AI claims and biometric misuse, reminding companies that “there is no AI exemption” from existing laws (Federal Trade Commission).

Governance is branching, but slower and more fragmented than the technology it attempts to regulate.
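The Act's tiered logic is easy to make concrete. The sketch below uses the Act's real tier names, but the example use-case mapping is a simplified illustration of the structure, not legal guidance:

```python
# Illustrative mapping of AI use cases to EU AI Act risk tiers.
# Tier names follow the Act; the use-case assignments are simplified
# examples for illustration, not legal advice.

RISK_TIERS = {
    "social_scoring": "prohibited",       # banned outright under the Act
    "medical_diagnosis": "high-risk",     # conformity assessment required
    "hiring_screening": "high-risk",
    "customer_chatbot": "limited-risk",   # transparency obligations
    "spam_filter": "minimal-risk",
    "frontier_llm": "general-purpose",    # the Act's special GPAI class
}

OBLIGATIONS = {
    "prohibited": ["do not deploy"],
    "high-risk": ["risk management system", "human oversight", "logging"],
    "limited-risk": ["disclose AI use to users"],
    "minimal-risk": [],
    "general-purpose": ["technical documentation", "systemic-risk evaluation"],
}

def obligations_for(use_case: str) -> list[str]:
    """Look up the oversight a given lineage of system would attract."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return OBLIGATIONS.get(tier, ["classify before deployment"])
```

The point of the structure is lineage-awareness: the same transformer DNA lands in a different tier, with different obligations, depending on where the daughter cell deploys.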

The Real Timeline: Branching and Attrition

  • 2017–2020: Transformer DNA → general LLMs (BERT, GPT-3).

  • 2022: ChatGPT launches, becoming the mass distribution channel.

  • 2023–2024: Vertical specialization (BloombergGPT, Med-PaLM, Code Llama), emergence of agent frameworks, and proliferation of small models.

  • 2025: Agent runtimes (OpenAI Responses API), edge multimodal models (Gemma 3n), and infrastructure megaprojects (NVIDIA–OpenAI partnership). Regulators begin lineage-aware oversight.

  • But for every Code Llama, Med-PaLM, or GPU factory that thrives, dozens of experiments stall or die before reaching production.

Strategy in a Branching, High-Attrition World

  1. Architect for plurality, prune ruthlessly. Don’t bet on one “brain.” Mix general and specialized models, but kill weak lineages early.

  2. Govern by role, not hype. Clinical AI ≠ HR AI ≠ trading AI. Oversight must be contextual.

  3. Invest in edge carefully. Gains in speed and privacy must be weighed against governance and update risk.

  4. Adopt agentic patterns selectively. Start narrow, with auditable memory and clear stop conditions.

  5. Plan for attrition. Assume many AI tools will disappear. Keep processes portable and vendor-agnostic.
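"Portable and vendor-agnostic" can be made concrete with a thin adapter layer. The sketch below is hypothetical (the class and function names are illustrative stand-ins, not real SDKs): business logic talks to one interface, so when a lineage goes extinct, the swap is one line, not a rewrite.

```python
# Sketch of a provider-agnostic model interface, so a retired vendor
# (an extinct "lineage") can be replaced without touching business logic.
# All class and method names here are illustrative.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class VendorAModel:
    """Stand-in for one provider's hosted SDK, wrapped behind the interface."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class LocalEdgeModel:
    """Stand-in for a small on-device model: same interface, different lineage."""
    def complete(self, prompt: str) -> str:
        return f"[edge] {prompt}"


def summarize_ticket(model: TextModel, ticket: str) -> str:
    # The business process depends only on the interface, never the vendor.
    return model.complete(f"Summarize: {ticket}")


# Swapping lineages is a one-line change at the call site:
hosted = summarize_ticket(VendorAModel(), "login fails on mobile")
local = summarize_ticket(LocalEdgeModel(), "login fails on mobile")
```

The design choice is the point: attrition is survivable only if the process outlives any one daughter cell.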

The Sharper Metaphor

Mitosis explains replication and specialization. But in practice, AI resembles Darwinian speciation in a hostile environment:

  • High mutation rate: new forks, variants, and lineages appear constantly.

  • High mortality rate: most fail; a few adapt and thrive.

  • Fitness defined by economics, compliance, and trust, not just benchmarks.

Bottom line: AI isn’t a bubble. It’s mitosis, tempered by speciation. Branching is inevitable, but survival is selective. The winners will be those lineages — and organizations — that can pass the fitness tests of cost discipline, compliance survival, and customer trust.

Key Concepts and Working Terms

  • Tech Bubble: A common metaphor used to describe technology hype cycles, where inflated expectations “burst” and lead to collapse. In this post, the bubble metaphor is contrasted with branching models of AI growth.

  • Mitosis: A biological process where a parent cell divides into daughter cells. Here, it serves as a metaphor for general-purpose AI models splitting into specialized versions, each inheriting the same foundational “DNA” but evolving differently.

  • Speciation: In biology, the formation of new, distinct species through evolutionary processes. Applied here, it highlights that not every AI offshoot will survive; survival is shaped by selective pressures like economics, regulation, and trust.

  • Transformer DNA: A shorthand for the shared architecture (transformer models) underpinning most current AI systems. Just as DNA passes traits to offspring, the transformer structure passes capabilities to derivative models.

  • Selective Pressures: Forces that determine which lineages survive. In biology, these are environmental; in AI, they include compute costs, talent availability, regulatory constraints, and customer demand.

  • Branching: The process of a single lineage splitting into multiple pathways. In this framing, AI branching emphasizes diversification rather than collapse, showing how different models adapt to different environments.

Works Cited

European Parliament and Council. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on Artificial Intelligence (Artificial Intelligence Act). Official Journal of the European Union, 12 July 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689.

Federal Trade Commission. “Advertising and Marketing on the Internet: Rules of the Road.” FTC.gov, 2023. https://www.ftc.gov/business-guidance/resources/advertising-marketing-internet-rules-road.

Google. “Gemma 3 Models.” AI at Google, 2025. https://ai.google.dev/gemma.

Microsoft. “Introducing Phi-3: Scalable Small Language Models.” Microsoft Research Blog, 2024. https://www.microsoft.com/en-us/research/blog/phi-3-small-language-models.

OpenAI. “Function Calling and Responses API.” OpenAI Developer Documentation, 2023–2025. https://platform.openai.com/docs.

Salian, Isha. “NVIDIA, OpenAI Announce ‘the Biggest AI Infrastructure Deployment in History.’” NVIDIA Newsroom, 22 Sept. 2025. https://blogs.nvidia.com.

Singhal, Karan, et al. “Towards Expert-Level Medical Question Answering with Large Language Models.” arXiv, 2023. https://arxiv.org/abs/2305.09617.

U.S. Food and Drug Administration. “Artificial Intelligence (AI)-Enabled Medical Devices.” FDA.gov, updated July 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices.

U.S. Food and Drug Administration. “Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan.” FDA.gov, 2021. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-software-medical-device-action-plan.

Wu, Shijie, et al. “BloombergGPT: A Large Language Model for Finance.” arXiv, 2023. https://arxiv.org/abs/2303.17564.
