The End of the Excuse Economy

Present Conditions, Not Predictions

What follows is not a forecast. It is an account of present conditions.

When elite institutions publicly describe software systems as “employees,” they are not announcing a future of work. They are acknowledging a current belief that economic power may now be exercised without reciprocal obligation, and that this asymmetry may be normalized through language rather than justified through structure.

This is not confusion or exaggeration. It is not regulatory misunderstanding or cultural lag. It is the explicit abandonment of reciprocity, the foundational principle that authority, profit, and legitimacy must be matched by responsibility. It is an attempt to reframe that abandonment as inevitability. That attempt fails.

Automation is not the subject of dispute. Automation is inevitable. The question is whether institutions that deploy automation at scale remain bound by the obligations that historically accompanied productive power. Automation without obligation is not neutral. It is illegitimate. No serious counter-argument survives examination by arithmetic, governance theory, or historical precedent.

The Irreducible Asymmetry

At the center of every defense lies a single, immovable fact. AI systems now displace human labor at scale while contributing nothing to the fiscal, legal, or civic systems that labor once sustained. They do not pay payroll taxes. They do not fund social insurance. They bear no professional liability. They absorb no downside risk. They do not participate in civic reciprocity. The systems they replace did all of these things. The difference is not ideological. It is structural.

Every justification offered by elite firms, whether productivity, efficiency, competitiveness, or shareholder duty, attempts to divert attention from this fact. None confronts it directly, because confronting it would require acknowledging that the current arrangement externalizes costs while privatizing gains. This is not a peripheral flaw. It is the core design.

The Failure of Historical Analogy

Defenders often invoke historical precedent. They argue that every major wave of automation was met with similar fears and that, in retrospect, those fears proved unfounded. Employment eventually grew. New forms of work emerged. Productivity gains were absorbed. From this, they conclude that present concerns merely repeat a familiar cycle of resistance to progress.

This analogy fails because it misidentifies what is being automated.

Previous waves of automation primarily replaced human muscle and routine physical coordination. They increased output while leaving intact the institutional mechanisms through which accountability, liability, and fiscal contribution were assigned. Even when work was deskilled, it remained legible to law, taxation, and governance. Displaced workers were reabsorbed into roles that still participated in the social contract.

The current wave is categorically different. AI systems do not merely accelerate execution. They increasingly substitute for evaluation, synthesis, and judgment, which are the very mechanisms through which responsibility is allocated in modern institutions. When judgment is automated, accountability does not migrate. It dissolves. No previous industrial transition permitted institutions to exercise authority while plausibly denying ownership of outcomes at scale. The claim that this has all happened before is therefore not reassurance. It is a category error.

Language as Institutional Laundering

The language used to normalize this shift is not incidental. The term “employee” is not a metaphor. It is a claim.

Historically, employment signified accountability, supervision, replaceability with consequence, legal exposure, and mandatory contribution to shared public systems. Software systems possess none of these attributes. To retain the moral authority of the term while discarding every associated obligation is not linguistic convenience. It is semantic laundering. Familiar categories are repurposed to legitimize a radically asymmetrical entitlement, namely the benefits of labor without its responsibilities.

The Accountability Vacuum

This asymmetry produces an accountability vacuum that is not accidental. In functioning institutions, power is bounded by responsibility. AI systems sever this relationship by design. They influence decisions without owning outcomes. They scale errors without absorbing consequences. They shape judgment without being subject to judgment.

When failures occur, responsibility dissolves into abstraction through references to models, workflows, recommendations, or oversight committees, none of which can be sanctioned in a meaningful sense. Oversight without named liability is not governance. It is engineered deniability.

Fiscal Evasion by Design

Claims that existing corporate taxes suffice are evasions. Payroll taxes are not interchangeable with profit taxes. Payroll funds social insurance, retirement systems, workforce stability, and local public goods. Automation reduces payroll. Reduced payroll weakens or collapses these systems.

Profit taxes are volatile, centralized, aggressively minimized, and structurally misaligned with displacement. Productivity gains do not resolve this mismatch. They intensify it. Economic growth decoupled from wage distribution does not stabilize fiscal systems. It hollows them out. This is not theory. It is arithmetic.

The Destruction of the Talent Pipeline

The damage extends beyond fiscal systems into institutional continuity. Elite firms historically relied on large junior cohorts, apprenticeship models, and long-term skill formation. Automation allows these firms to consume the outputs of this pipeline while dismantling its inputs.

Entry roles disappear. Training investment declines. Leverage concentrates at the top. This is not efficiency. It is asset stripping. The asset is human capital largely subsidized by the public. No institution may destroy the conditions of its own legitimacy and then demand continued trust.

The Illusion of Human Oversight

The invocation of “human in the loop” does not repair this breach. At scale, it functions as risk laundering. A human who cannot realistically re-derive, audit, or challenge an automated output is not exercising judgment. They are absorbing liability. Institutions understand this and rely on it.

Scale Without Moral Learning

The true hazard of AI is not error. Humans err constantly. The hazard is scale without moral learning. AI systems make the same mistake everywhere, instantly, and cheaply.

Firms deploy them precisely because scale magnifies profit. Refusing to proportionally magnify responsibility is not prudence. It is moral arbitrage.

The False Necessity of Global Competition

When pressed on these points, institutions often retreat to necessity. They argue that even if the arrangement is flawed, restraint is impossible. If one firm or jurisdiction imposes reciprocal obligations, others will not. Competitive pressure will force compliance with the least restrictive standard. Responsibility, they suggest, is incompatible with global markets.

This argument concedes the critique while attempting to universalize it. It does not defend legitimacy. It claims only that no individual actor can afford to behave differently.

History contradicts this claim. Reciprocity has never been enforced through voluntarism alone. It has been embedded through trade regimes, professional standards, liability frameworks, and regulatory coordination. Labor law, environmental protection, financial disclosure, and consumer safety were all once dismissed as threats to competitiveness. In each case, the response was not abandonment but harmonization. Standards became conditions of market access. Compliance became a cost of legitimacy rather than a discretionary expense.

If automation creates value by exploiting regulatory asymmetry, that value is not productive. It is extractive. A race to the bottom is not an economic law. It is a governance failure.

Efficiency as Ethical Evasion

Efficiency, when invoked as a defense, collapses under inspection. Efficiency is not a virtue in isolation. The relevant questions are efficient for whom, at whose expense, and over what horizon.

History is filled with efficient systems that were socially destructive and politically unstable. Efficiency without obligation is extraction optimized by technology. When institutions invoke efficiency as a conversation stopper, they concede the absence of ethical justification.

The Collapse of Consent

The consequences of this breach are not speculative. Social systems operate on consent as much as coercion. People tolerate inequality when they believe rules apply consistently, gains are indirectly shared, and institutions act in good faith.

Automation without obligation destroys that consent. It communicates that individuals are replaceable, their losses are private, and adaptation is their responsibility alone. Stability persists only through inertia. Inertia is not equilibrium. It is delay.

Why Reskilling Is Insufficient

Reskilling does not resolve this. It fails when new roles are fewer than displaced ones, when wage floors collapse, when geographic mismatch persists, and when transitions outlast workers' savings.

Telling displaced workers to adapt while capturing automation gains privately is not empowerment. It is abdication.

The Inevitability of Backlash

Every system that breaks reciprocity generates backlash. The form may vary: regulatory, populist, or reactionary. The outcome does not.

Institutions that refuse measured reform invite chaotic reform later. Politics cannot be outrun with software.

Silence in response to critiques of this structure is not neutrality. It is acknowledgment that a defense would expose too much.

The End of the Trial Phase

Language changes may delay reckoning, but they cannot prevent it. Copilots, agents, digital workers, and augmentation layers do not alter the underlying fact. When labor is replaced, obligation follows. When decision making is automated, accountability must attach. When productivity gains are privately captured, reciprocity becomes unavoidable.

AI systems are no longer experimental. They are operational, revenue generating, workforce altering, and politically visible. With maturity comes obligation. Institutions that refuse this transition are not innovators at the frontier. They are holdouts attempting to preserve asymmetry until external correction arrives.

Capitalism and the Limits of Legitimacy

This argument is not anti-capitalist. Capitalism has endured precisely because it repeatedly revised its boundaries, absorbing labor law, environmental regulation, consumer protection, and financial disclosure. Each time, elites predicted collapse. Each time, the system adapted.

What capitalism cannot survive is the insistence that markets absolve actors of responsibility. Markets allocate capital efficiently. They do not allocate legitimacy, stability, or public goods without rules.

Conclusion: Obligation Is No Longer Optional

There is no defensible future in which software replaces workers, firms capture the gains, the public absorbs the losses, and institutions retain legitimacy.

Such arrangements do not self-correct. They are corrected from the outside through regulation, litigation, or withdrawal of consent.

That is not a threat. It is a historical pattern.

Works Cited

FinalRoundAI Staff. “McKinsey’s Workforce Now Includes Thousands of AI Agents.” FinalRoundAI, 2025, https://www.finalroundai.com/blog/mckinsey-ai-agents-employees.
