Anthropic's Claude Code crackdown reveals the fragility of AI's loss-leader economics

The AI company's restrictions on third-party tools expose tensions between developer cultivation and business sustainability that may define the industry's future

The error message offered no explanation. One day the OAuth connection worked; the next it didn't. Developers who had routed their $200 Claude Max subscriptions through third-party coding tools—OpenCode, custom harnesses, anything other than Anthropic's official interface—found themselves locked out. No announcement preceded the change. No transition period softened it.

Within hours, the Terms of Service everyone had skipped past became suddenly relevant. Section 3.2 prohibits using Anthropic's services "to develop any products or services that compete with our Services." The language had always been there. Most developers had simply assumed it meant something narrower than it appeared to say.

They were about to discover otherwise.

The clause everyone ignored

Competitive use restrictions are boilerplate in technology licensing. Microsoft includes them. So does Google. The legal departments that draft these terms rarely expect them to matter; they exist as insurance against scenarios that seldom materialise.

What makes Anthropic's situation different is what the company sells. Claude Code is not a database or an analytics platform with clearly bounded functionality. It is a general-purpose tool for building software—including, potentially, software that competes with Claude Code itself. The restriction that seems reasonable when applied to a CRM becomes philosophically incoherent when applied to a coding assistant. Can a hammer's warranty exclude building other hammers?

Anthropic's engineers have gestured toward a narrow reading. The clause targets model distillation, they suggest—competitors using Claude's outputs to train rival systems. It addresses pricing arbitrage, where bad actors exploit subsidised subscriptions to resell API access. These are legitimate concerns. But the written terms say something broader, and developers have learned that written terms tend to matter more than verbal reassurances when interests diverge.

The anxiety this creates is real. A startup founder building an AI-enhanced development tool must now consider whether her workflow violates terms she cannot renegotiate. A freelancer customising his setup for maximum productivity wonders if optimisation has become infringement. The ambiguity benefits nobody—except, perhaps, Anthropic's lawyers, who retain maximum flexibility to enforce or not enforce as circumstances warrant.

The $3,000 subscription

Understanding why Anthropic draws these lines requires knowing what Claude Code actually costs to provide. The $200 monthly Max subscription is, by any honest accounting, a money-losing proposition. Heavy users report consuming what would cost $3,000 or more at published API rates. Some have tracked their usage meticulously; the numbers are not close.
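
The gap is easier to see as a back-of-envelope calculation. The Python sketch below is illustrative only: the per-token prices and monthly token volumes are assumptions chosen to reproduce a roughly $3,000 bill, not Anthropic's actual published rates or any individual user's data; the only figure taken from the reporting above is the $200 subscription.

```python
# Back-of-envelope sketch of the subsidy gap described above.
# The per-token prices and monthly usage figures are illustrative
# assumptions, not published rates or real usage data.

INPUT_PRICE_PER_MTOK = 3.00     # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00   # assumed $ per million output tokens
SUBSCRIPTION_PRICE = 200.00     # monthly Max subscription, per the article

def api_equivalent_cost(input_mtok: float, output_mtok: float) -> float:
    """Cost of the same usage billed at the assumed per-token API rates."""
    return input_mtok * INPUT_PRICE_PER_MTOK + output_mtok * OUTPUT_PRICE_PER_MTOK

# A hypothetical heavy month: 700M input tokens, 60M output tokens.
usage_cost = api_equivalent_cost(700, 60)
print(f"API-equivalent cost: ${usage_cost:,.0f}")                        # ~$3,000
print(f"Implied subsidy:     ${usage_cost - SUBSCRIPTION_PRICE:,.0f}")   # ~$2,800
```

Whatever the true per-token numbers, the shape of the arithmetic is the same: the subscription price covers a small fraction of the compute it buys.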

This arithmetic is intentional. Anthropic needs developers building with Claude, talking about Claude, becoming dependent on Claude. The subscription is not a product but a customer acquisition expense—marketing spend disguised as revenue. The economics work only if subsidised users eventually convert to higher-margin enterprise contracts, or if their presence creates network effects that justify the subsidy, or if their usage generates training data that improves the underlying models.

Third-party harnesses broke this logic. They let developers extract the subsidy whilst routing around the lock-in. A user running Claude through OpenCode still costs Anthropic $3,000 in compute; she just isn't generating the ecosystem engagement that was supposed to justify that expense. From Anthropic's perspective, she's a free rider. From her perspective, she paid for a service and wants to use it her way.

Both framings contain truth. The conflict is structural, baked into the loss-leader model at conception. Eventually, the subsidy's beneficiaries and its funders must disagree about what was purchased.

What Borland learned

Software history offers an instructive parallel that most commentary has missed. In the late 1980s, Borland International—then a dominant force in development tools—attempted to prohibit users of its C++ compiler from building competing compilers. The logic seemed reasonable to Borland's lawyers: why should customers use their product to undermine their market?

Developers found the restriction absurd. The entire value proposition of a compiler is generality; it transforms source code into executables without caring what those executables do. A compiler that cannot compile compilers is a contradiction in terms—or at minimum, a tool that has declared itself less useful than it could be.

Borland retreated. The restriction proved unenforceable in practice and corrosive to developer trust in principle. The episode became a cautionary tale about the limits of licensing terms in tool markets, where users expect their tools to be agnostic about what they build.

The parallel to AI coding assistants is imperfect but illuminating. Claude Code's value, like a compiler's, derives from generality. It helps developers build whatever they're building. Restrictions on what can be built don't just limit specific use cases; they undermine the tool's essential promise. Whether Anthropic's leadership has absorbed this history is unclear. That they will have to relearn its lessons seems increasingly likely.

The industry's original sin

Zoom out further and an uncomfortable truth emerges. The AI industry's relationship with intellectual property has always been, to put it charitably, asymmetric.

Anthropic trained Claude on the internet. Not on content it licensed or created—on content that existed, that others had made, that was simply there to be ingested. The legal theories justifying this practice emphasise transformation: Claude does not reproduce training data but creates something new from patterns learned across billions of documents. Fair use, the argument goes, protects this kind of learning.

Perhaps. The courts have not yet ruled definitively. But set aside legality and consider the ethical posture. Anthropic built its capabilities by treating others' intellectual property as raw material requiring no permission. Now it demands strict adherence to its own terms of service, threatening account termination for violations. The asymmetry is stark: consume freely, protect fiercely.

This is not hypocrisy in the simple sense. Companies routinely operate under different rules as buyers versus sellers. But developers, who tend toward libertarian instincts about information and who have watched AI companies hoover up creative work without compensation, find the double standard grating. The perception that these companies socialised their inputs whilst privatising their outputs creates resentment that no PR campaign can dispel.

The trust problem

For individual developers, the practical lesson is ancient but worth restating: platform dependency is professional risk.

The developers who built workflows around Claude Code's particular strengths—its handling of context, its reasoning patterns, its specific failure modes—now discover their expertise has a counterparty. Anthropic can change the terms, block the access, modify the behaviour. The switching costs developers incurred whilst learning the platform become sunk costs if the platform becomes hostile.

This pattern recurs throughout technology history. Twitter's API changes stranded third-party client developers. Facebook's platform pivots destroyed businesses built on what seemed like stable infrastructure. Each generation of developers must relearn that platforms are partners until they aren't, and the moment of divergence rarely comes with advance notice.

AI coding tools may represent unusually dangerous dependencies. They become intimate with developers' workflows in ways that databases and APIs do not. A developer's prompting intuitions, verification habits, and mental models all shape themselves around a specific tool's behaviours. When that tool changes—or when access to it becomes conditional on compliance with terms the developer never anticipated—the adjustment costs are substantial even if invisible.

The migration toward model-agnostic setups already underway represents rational risk management. It also represents a collective action problem: developers spreading bets across providers cannot develop deep expertise with any; tools that cannot retain users cannot justify sustained investment; and the ecosystem underinvests in quality because returns cannot be captured. Platform trust, once lost, damages everyone.
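
What such a model-agnostic setup can look like is sketched below. The Python is purely hypothetical, not the API of OpenCode or any other real harness: prompting code talks to a thin router, and switching providers becomes a configuration change rather than a workflow rewrite.

```python
# Minimal sketch of a model-agnostic setup: a thin abstraction that keeps
# prompting code independent of any single provider. Class and provider
# names are hypothetical, not any real tool's API.

from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Completion:
    provider: str
    text: str

class ModelRouter:
    """Routes a prompt to whichever registered backend is currently preferred."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        self._preferred: Optional[str] = None

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        self._backends[name] = backend
        if self._preferred is None:
            self._preferred = name

    def prefer(self, name: str) -> None:
        if name not in self._backends:
            raise KeyError(f"unknown backend: {name}")
        self._preferred = name

    def complete(self, prompt: str) -> Completion:
        # Swapping providers is a one-line preference change, not a rewrite.
        if self._preferred is None:
            raise RuntimeError("no backends registered")
        text = self._backends[self._preferred](prompt)
        return Completion(provider=self._preferred, text=text)

# Usage: each backend would wrap a real SDK call behind the same signature.
router = ModelRouter()
router.register("provider_a", lambda p: f"[stubbed response from A to: {p}]")
router.register("provider_b", lambda p: f"[stubbed response from B to: {p}]")
router.prefer("provider_b")
print(router.complete("Explain OAuth scopes").text)
```

The point of the indirection is not abstraction for its own sake; it is keeping switching costs low enough that no single provider's terms become leverage.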

Where this leads

Predicting resolution requires distinguishing between what Anthropic can do, will do, and should do.

The company can enforce its terms. It controls authentication, observes usage patterns, and retains the right to terminate accounts. Whether such enforcement would survive challenge in European jurisdictions—where competition authorities scrutinise dominant platforms more aggressively—remains untested. Whether Anthropic even qualifies as dominant, in a market where OpenAI commands greater overall share, is itself uncertain.

What Anthropic will do likely depends on competitive dynamics. OpenAI has reportedly taken the opposite approach, explicitly permitting third-party harness access to its subscription offerings. If developers migrate toward that permissiveness, Anthropic may calculate that restrictive terms cost more in market share than they protect in ecosystem control. Markets have a way of disciplining overreach when alternatives exist.

What Anthropic should do begins with clarity. The gap between stated intentions—"we genuinely want people building on Claude"—and written terms that prohibit competitive development breeds uncertainty that serves nobody. Developers cannot plan around ambiguity. They can plan around clear rules, even restrictive ones. The company's current posture combines maximal legal flexibility with minimal practical guidance, a combination that optimises for Anthropic's optionality at the cost of developer confidence.

The pattern repeating

Strip away the specifics and a familiar dynamic emerges. The strategies that enabled AI's explosive growth—capital subsidies, permissive IP consumption, developer-first positioning—contain tensions that eventually demand resolution. Loss leaders require eventual profits. Permissive training practices invite restrictive responses from rightsholders and regulators. Developer cultivation requires developer trust, which erodes when platforms demonstrate that their interests and developers' interests can diverge.

None of this is unique to AI. The pattern traces through IBM's unbundling, Microsoft's browser wars, Google's platform evolutions. What differs is velocity. AI companies are compressing decades of platform dynamics into years, discovering in real time that growth-phase strategies create maturity-phase problems.

The developer who rebuilt his workflow after Anthropic's OAuth block, spreading his dependencies across multiple providers, has absorbed the essential lesson. Platforms are infrastructure until they become leverage. The transition happens without announcement. The only protection is diversification—and the recognition that any tool you don't control can become a tool used against you.

That this lesson must be relearned with every generation of infrastructure suggests something about the technology industry that no terms of service can change.

#artificial intelligence