Meta pays Google billions for cloud services despite $65 billion data centre spending spree
Tech giants' AI arms race forces them to become each other's biggest customers
Mark Zuckerberg is building a data centre in Louisiana nearly the size of Manhattan. He's also handing Google more than $10 billion for cloud services over six years.
This apparent contradiction captures the defining tension of the AI era: even near-unlimited money cannot buy computing capacity quickly enough. Despite Meta's staggering $65 billion infrastructure budget for 2025, the company still needs Google's help to power its artificial intelligence ambitions.
The deal, confirmed by two people familiar with the matter, reveals how AI's computational hunger is upending the technology industry's most fundamental assumption. For decades, the biggest tech companies could build whatever they needed in-house. Those days are over.
Instead, we're witnessing the emergence of a new competitive model where sworn enemies become indispensable partners. Amazon hosts Netflix's global streaming empire. Apple trains its AI models on Google's servers. OpenAI, Microsoft's flagship AI partner, quietly rents computing power from Oracle. The very companies that compete for our attention, data, and wallets now depend on each other for the infrastructure that powers their rivalry.
When $320 billion isn't enough
The numbers are staggering even by Silicon Valley standards. Meta, Amazon, Microsoft, and Google will collectively spend $320 billion on infrastructure in 2025—up from $230 billion last year. To put this in perspective, that's more than most countries' entire GDP, invested by just four companies in a single year.
Yet this astronomical spending cannot keep pace with AI's appetite for computing power. The problem isn't money—it's time. Building a new data centre takes 18 to 30 months, according to AWS. Training a cutting-edge AI model requires thousands of specialised chips working in perfect harmony. The mismatch between explosive demand and fixed construction timelines forces even giants to rent capacity whilst building their own.
Meta's predicament illustrates the challenge. The company expects its AI infrastructure needs to grow exponentially as it integrates artificial intelligence across Facebook, Instagram, and WhatsApp for billions of users. Its massive capital investment will eventually create that capacity, but "eventually" isn't fast enough when competitors are moving at AI speed.
This temporal mismatch has created an unprecedented seller's market for AI infrastructure. Global cloud spending hit $84 billion in just the third quarter of 2024, growing 23% year-over-year. Companies that once proudly proclaimed their independence now compete desperately for access to their rivals' excess capacity.
Google's secret weapon: the decade-long bet that paid off
The Meta deal hinges on something Google's competitors cannot easily replicate: custom chips called Tensor Processing Units, or TPUs. Whilst the name sounds like technical jargon, TPUs represent perhaps the most consequential strategic investment of the past decade.
In 2013, Google realised deploying AI at scale would require doubling its data centre footprint—an impossibly expensive prospect. Instead of simply buying more conventional hardware, the company embarked on a radical experiment: designing its own AI chips from scratch.
The bet paid off spectacularly. Google's latest TPUs can train AI models 2.8 times faster than their predecessors whilst consuming dramatically less energy. More crucially, industry analysis suggests Google achieves AI computing at roughly 20% of the cost paid by companies relying on Nvidia's graphics cards: a four- to six-fold advantage that money alone cannot match.
This explains why even OpenAI, despite being bankrolled by Microsoft, has begun using Google's TPUs. The cost savings are simply too large to ignore: when your monthly computing bill runs into the tens of millions of dollars, cutting it to a fifth can save hundreds of millions a year, and an efficiency gap of that size becomes existential.
Other tech giants are scrambling to catch up. Amazon has developed its own AI chips called Trainium and Inferentia. Microsoft is reportedly working on custom silicon. But chip development takes years, and Google's decade-long head start has created a moat that deepens with each generation.
The TPU advantage reveals a profound truth about modern competition: in the AI era, whoever controls the most efficient infrastructure holds disproportionate power. Google isn't just renting computing cycles to Meta—it's leveraging a strategic asset that took a decade to build and cannot be quickly replicated.
The multi-cloud imperative reshaping Silicon Valley
Meta's partnership with Google reflects a broader industry transformation towards what insiders call "multi-cloud" strategies. Today, 81% of companies use multiple cloud providers, up from virtually zero a decade ago.
This isn't about corporate indecision—it's about survival. Each cloud provider has developed distinct strengths that mirror their underlying businesses. Amazon's AWS offers the broadest range of services, reflecting its operational complexity as a retailer. Microsoft's Azure integrates seamlessly with corporate software, leveraging decades of enterprise relationships. Google Cloud excels at AI and data analytics, built on the company's core competencies.
No single provider can optimally serve every workload, forcing companies to mix and match. Netflix runs entirely on AWS, enabling global expansion without building data centres. Spotify migrated completely to Google Cloud, shutting down its own servers. Apple trains AI models on Google's infrastructure whilst developing competing products.
Even Oracle, the database giant known for controlling every aspect of its technology stack, now partners with both Google and Microsoft. The company's cloud infrastructure cannot match the hyperscalers' scale, so Oracle focuses on database software whilst letting partners handle the underlying computing.
This specialisation creates a web of interdependencies that would have been unthinkable in previous technology eras. When IBM dominated computing, companies either used IBM or built alternatives. Today's landscape is far more complex, with each major player dependent on rivals for critical capabilities.
The trend reflects AI's unique demands. Training large language models requires massive parallel computing power. Inference needs low-latency processing close to users. Data storage spans from high-speed access to cost-effective archival. No company can excel at everything simultaneously.
Competition and collaboration collide
These partnerships create a fascinating paradox: companies are strengthening their competitors whilst advancing their own interests. When Meta pays Google billions, it funds the very infrastructure that powers Google's competing AI services. Yet Meta has little choice—the alternative is falling behind in the AI race.
This dynamic challenges traditional antitrust thinking. Regulators typically worry about companies becoming too powerful within specific markets. But AI infrastructure appears to be creating a new model where power is distributed across an interconnected system, with each player holding different types of leverage.
The Federal Trade Commission has raised concerns about cloud computing practices, particularly fees that discourage customers from switching providers. However, the emergence of multi-cloud strategies suggests these concerns may be overstated. Companies appear willing to pay switching costs when benefits justify the expense.
The relationships also create natural checks on any single company's power. If Google attempted to abuse its TPU advantage, customers would accelerate development of alternatives. If Amazon raised AWS prices excessively, companies would diversify their infrastructure more aggressively.
Rather than consolidation, we may be seeing the evolution of a new competitive model—one where success depends on orchestrating the most effective combination of internal capabilities and external partnerships. Companies compete fiercely for end customers whilst collaborating extensively on the infrastructure that enables that competition.
The new rules of technological power
Looking ahead, these interdependencies will likely intensify rather than resolve. AI models are growing exponentially in size and complexity, requiring ever more sophisticated infrastructure. The computational demands of training GPT-4's successor, for instance, will dwarf current requirements.
This creates a future where technological leadership depends as much on partnership strategy as internal innovation. Companies must excel at their distinctive capabilities whilst intelligently leveraging others' expertise. Those that try to build everything themselves risk falling behind those that focus resources more strategically.
The financial commitments suggest these relationships will deepen. Meta's $10 billion Google partnership is one of the largest cloud-computing contracts ever announced. Similar arrangements are inevitable as companies balance the costs of internal development against the speed of external partnerships.
For investors and policymakers, this evolution presents complex challenges. Traditional metrics of market power—market share, revenue concentration, customer lock-in—may be less relevant in an industry where rivals routinely become each other's largest customers.
The Meta-Google deal signals the emergence of what might be called "competitive collaboration"—a model where the most successful companies are those that best navigate the balance between internal capabilities and external dependencies. In this new paradigm, sustainable advantage comes not from controlling everything, but from orchestrating the most effective combination of resources across an interconnected ecosystem.
This represents more than a tactical shift in procurement strategy. It suggests that the AI era is fundamentally rewriting the rules of technological competition, creating a world where even the most powerful companies must accept interdependence as the price of innovation. The companies that thrive will be those that master this new reality first.