You Can’t Govern Systems at Internet Scale Without Evidence

A federal judge recently told the State of Texas something institutions are still struggling to internalize: you cannot impose system-wide controls without demonstrating system-wide harm. Texas attempted to mandate age verification across app stores and online services under the banner of child protection. The court stopped it, not because protecting minors is illegitimate, but because the state could not show evidence proportional to the scope of control it sought to impose.

That distinction matters more than the headline suggests.

This was not a free-speech technicality. It was a governance failure. Texas collapsed mandate, mechanism, and justification into a single move: protect children by age-gating everything. What it lacked was an evidentiary chain explaining where harm occurs, how often, under what conditions, and why narrower controls would fail. Courts are increasingly unwilling to accept governance by assertion, especially when the blast radius covers the entire internet.

The judge’s reasoning exposes a pattern I’ve seen repeatedly inside large systems. When institutions lose the ability to localize risk, they respond by spreading controls everywhere. Evidence decays, accountability blurs, and precaution quietly becomes overreach. What looks like decisiveness is often epistemic exhaustion. Instead of explaining risk, systems try to outrun it.

That dynamic is not unique to content moderation or age verification. It is already present in AI governance debates. Blanket restrictions, vague safety claims, and post-hoc documentation requirements are frequently offered in place of traceable harm models. The assumption is that authority alone can substitute for explanation. Courts are signaling that this assumption no longer holds.

What the Texas ruling reinforces is a structural principle: regulation at system scale requires evidence at system scale. Not narratives. Not intentions. Not analogies to other industries. Evidence that can survive scrutiny. Evidence that can be audited, challenged, narrowed, and updated over time.

This is precisely the gap that modern governance keeps falling into. We regulate as if complexity were optional. We design controls without designing the epistemic infrastructure that justifies them. When challenged, those controls fail not because they were malicious, but because they were unsupported.

The court did not reject regulation. It rejected unscoped regulation without proof. That distinction will matter far beyond Texas. As AI systems, platforms, and automated decision infrastructures expand, the question will no longer be whether governments can act, but whether they can explain why a particular intervention exists, where it applies, and how it adapts when conditions change.

Governance that cannot explain itself does not age well. It eventually collapses under legal, operational, or public trust pressure. The Texas law was simply an early stress test.

If institutions want durable authority in high-stakes digital systems, evidence cannot remain an afterthought. It has to be part of the architecture.
