Nearly Right

The environmental smokescreen that could reshape AI regulation

Mistral's pioneering environmental study looks more like regulatory positioning than genuine transparency

The victory came in an unlikely place: a small courtroom in Cerrillos, Chile, where local activists had just forced one of the world's most powerful corporations to tell the truth. Google's proposed data centre, the company finally admitted under legal pressure, would consume 7.6 million litres of potable water daily—during one of the region's worst droughts in memory. The revelation sparked outrage, blocked the project, and exposed an uncomfortable reality about artificial intelligence: the technology promising to solve the world's environmental crisis is wreaking havoc whilst systematically hiding its true impact.

Six months later and half a world away, that uncomfortable reality took on new dimensions. Mistral AI, a French startup barely 18 months old, announced the "first comprehensive lifecycle analysis of an AI model" with the polished confidence of an industry leader. The timing was perfect—just months before European regulators would enforce the world's first AI environmental reporting requirements. The methodology appeared rigorous, validated by leading consultancies and government agencies. The message was unmistakable: transparency problems solved, industry standards established.

But examine the numbers, the timing, and the competitive dynamics more closely, and a different story emerges. One where environmental disclosure serves as regulatory positioning rather than genuine accountability—and where transparency without constraint may provide sophisticated cover for an industry doubling down on exponential growth.

The vanishing trick of modern measurement

On paper, Mistral's figures tell a reassuring story. Mistral Large 2 generated 20.4 thousand tonnes of CO₂ equivalent over 18 months—roughly the annual energy-related emissions of 2,000 homes, or the emissions of 100 transatlantic flights. For a model competing with industry giants, these numbers suggest remarkable environmental progress.

Yet place these figures alongside established benchmarks and the story becomes murkier. GPT-3, with 175 billion parameters, generated an estimated 502 tonnes of CO₂ equivalent for training alone. GPT-4, with an estimated 1.76 trillion parameters, would have required vastly more. If Mistral Large 2 competes with these models yet reports so modest a figure for its entire lifecycle, the company has either achieved a breakthrough in AI efficiency or it is measuring something fundamentally different.

The methodology points towards the latter. Mistral acknowledges this represents "a first approximation" conducted amid the "difficulty to make precise calculations when no standards exist for LLM environment accountability." More revealing still, they bundle "model training and inference" into single figures representing 85.5% of emissions—obscuring the critical split between one-time development costs and ongoing usage impact.

This bundling isn't merely technical complexity; it's strategic obfuscation of the industry's central environmental challenge. Research consistently suggests inference consumes more energy than training over a model's lifetime, as AI queries occur "millions of times daily" across search engines, applications, and services. By hiding this breakdown, Mistral's "transparency" actually makes the most important environmental question less visible: how much of AI's impact comes from building models versus deploying them at scale?
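What the bundling hides can be made concrete with back-of-the-envelope arithmetic. In the sketch below, only the 20.4 thousand-tonne lifecycle total comes from Mistral's report; the training/inference splits and the query volume are invented purely for illustration.

```python
# Only the 20.4 kt lifecycle total is Mistral's figure; the splits and
# the query volume below are hypothetical, chosen to show how different
# environmental stories fit the same bundled number.

TOTAL_TCO2E = 20_400                 # bundled training + inference, tonnes CO2e
QUERIES = 1_000_000 * 365 * 1.5      # assumed: 1M queries/day over 18 months

for train_share in (0.9, 0.5, 0.1):  # hypothetical training/inference splits
    training_t = TOTAL_TCO2E * train_share
    inference_t = TOTAL_TCO2E - training_t
    per_query_g = inference_t / QUERIES * 1_000_000  # tonnes -> grams per query
    print(f"training {train_share:.0%}: one-off {training_t:,.0f} t, "
          f"ongoing {per_query_g:.1f} g CO2e per query")
```

Whether each query costs a few grams or tens of grams of CO₂e, and therefore how fast impact grows as deployment scales, depends entirely on a split the bundled figure declines to reveal.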

The timing tells the story

Mistral's disclosure coincides precisely with EU AI Act requirements taking effect in August 2025, which mandate environmental reporting by general-purpose AI model providers. This timing is no coincidence; it represents strategic positioning in an increasingly regulated market.

Consider the competitive dynamics. At barely 18 months old, Mistral hardly represents the established players with massive existing infrastructure. Google operates dozens of data centres globally; Microsoft's Azure spans more than 60 regions; Amazon Web Services controls roughly a third of the cloud market. Yet by establishing their methodology as the "first comprehensive lifecycle analysis," Mistral potentially sets compliance standards that these giants must follow.

Mistral explicitly calls for "mandatory figures to report" using "standardized, internationally recognized frameworks" whilst advocating that users choose the "model size that is best adapted to users' needs." This combination—mandatory reporting using their preferred metrics plus user responsibility for efficiency—creates asymmetric compliance costs favouring smaller, newer players whilst shifting blame to consumers.

The regulatory arbitrage opportunity is clear. Whilst the EU mandates environmental reporting, the UK maintains a "light-touch" approach and the US opts for "voluntary standards." By establishing EU-compliant methodology early, Mistral positions itself advantageously in the world's most regulated AI market whilst competitors scramble to retrofit compliance.

The pattern of environmental promises

To understand why scepticism is warranted, examine the track record of tech environmental claims. Google pledged to replenish 120% of the water it consumes by 2030, yet quietly dropped its carbon-neutrality claim as AI drove its emissions up 48% between 2019 and its July 2024 sustainability report. Amazon committed to net-zero by 2040, yet its emissions remain far above its 2019 baseline.

The mechanisms of this deception are well-documented. One analysis found that the data centre emissions of the largest tech companies are likely around 7.6 times higher than officially reported, thanks to renewable energy certificate loopholes: companies purchase certificates that don't require any actual renewable consumption at their facilities. Meanwhile, Microsoft, Google, and Amazon maintain contracts helping oil companies extract more fossil fuels using AI whilst simultaneously making climate pledges.

This creates a perverse dynamic where environmental regulation serves competitive rather than environmental purposes. Established players profit from both AI development and fossil fuel extraction. Environmental standards focused on measurement and reporting rather than absolute constraints allow them to maintain both revenue streams whilst appearing responsible.

The scaling impossibility

Mistral acknowledges "a strong correlation between model's size and its footprint", with impacts "roughly proportional to model size". Yet competitive pressure drives ever-larger models whilst deployment scales exponentially. Goldman Sachs projects that data centres will consume 8% of US power by 2030, up from 3% today, describing demand growth of a kind "the likes of which hasn't been seen in a generation" as all major tech companies go "full throttle on AI".

The fundamental contradiction is stark: environmental impact cannot fall through efficiency gains alone when both model sizes and deployment volumes grow exponentially. Northern Virginia's data centres alone will require energy equivalent to powering 6 million homes by 2030, whilst coal plants scheduled for closure are receiving life extensions to meet AI's surging electricity demand.
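The arithmetic behind that contradiction is worth making explicit. The growth rates in the sketch below are hypothetical, chosen only to show the shape of the problem: total impact is per-query energy multiplied by query volume, and even strong efficiency gains are swamped when volume grows faster.

```python
# Hypothetical growth rates, for illustration only: per-query energy
# improves 20% a year whilst query volume doubles each year.

per_query = 1.0   # normalised energy per query in year 0
volume = 1.0      # normalised query volume in year 0

for year in range(1, 6):
    per_query *= 0.80   # assumed 20% annual efficiency gain
    volume *= 2.0       # assumed annual doubling of deployment
    print(f"year {year}: total impact x{per_query * volume:.1f}")

# Year 5: 0.8**5 * 2**5 is roughly 10.5x the starting impact, despite
# five consecutive years of 20% efficiency improvements.
```

Under these assumptions impact grows roughly 60% a year; it falls only if efficiency improves faster than deployment expands.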

This scaling challenge reveals why training-focused environmental metrics are becoming systematically irrelevant. If inference drives environmental impact through usage rather than development, then current regulatory approaches miss the real challenge entirely. Companies can report low "training emissions" per model whilst their inference infrastructure drives exponential environmental impact through deployment scale.

The measurement gaming system

Existing tools for predicting AI carbon footprints have "serious limitations" and cannot model mixture-of-experts architectures or embodied carbon. In this measurement vacuum, whoever establishes the methodology gains significant competitive advantage.

Mistral's approach—bundling training and inference emissions whilst calling for mandatory industry reporting—creates a regulatory capture opportunity. Their methodology becomes the compliance standard whilst obscuring the metrics that would reveal AI's true environmental trajectory.

The pattern parallels other industries where measurement-focused environmental policy allows continued expansion whilst appearing responsible. Companies optimise for reported metrics rather than actual impact, creating an elaborate accounting exercise that disguises rather than reduces environmental harm.

The broader implications

The environmental costs of AI disproportionately affect the Global South and drought-stricken areas, as seen in Chile where Google's water consumption plans sparked community resistance. Meanwhile, European regulators try to govern AI's environmental impact through measurement requirements rather than absolute constraints.

This approach—governance through transparency rather than limitation—reflects a broader policy failure. Environmental regulation becomes a competitive tool rather than an environmental one, allowing continued expansion whilst providing the appearance of responsibility.

The real question isn't whether AI companies will meet environmental reporting requirements—they will, having learned the optimisation game from renewable energy certificates and carbon offsets. The question is whether measurement-focused regulation can address exponential scaling when the fundamental business model depends on ever-larger deployment.

The choice ahead

The technology sector already produces emissions comparable to global aviation, with AI driving unprecedented energy demand growth. As one researcher warns, environmental costs are "only going to get worse unless there's serious intervention." Yet that intervention cannot come through measurement alone.

Mistral's environmental disclosure, for all its methodological sophistication, exemplifies a broader policy failure: governance through transparency rather than constraint. The company establishes measurement frameworks that favour smaller players, shifts responsibility to users, and calls for "mandatory reporting" whilst avoiding the fundamental question of whether exponential AI deployment is environmentally sustainable.

In Cerrillos, Chilean activists demonstrated that transparency can force accountability when coupled with power to impose consequences. They didn't need sophisticated lifecycle analyses—they needed the ability to say no to projects that threatened their community's water supply during drought.

The contrast is instructive. Local communities can block individual data centres when environmental costs become visible and immediate. But the global AI industry operates under different rules, where environmental governance means optimising reported metrics rather than constraining actual impact.

As European regulators prepare to enforce their measurement requirements, they face a fundamental choice. They can accept a regulatory framework that turns environmental accountability into competitive advantage for companies skilled at compliance gaming. Or they can ask harder questions about whether transparency without limits can address exponential scaling when the underlying business model depends on ever-greater deployment.

The Chilean activists won their fight because they demanded something simple: truth about environmental costs, backed by power to reject projects that imposed unacceptable burdens. Europe's regulators have the first part—sophisticated measurement requirements that will generate impressive disclosure documents. Whether they have the second remains the open question that will determine whether AI's environmental impact is governed or merely measured into sophisticated irrelevance.

#artificial intelligence #climate crisis