A sweeping new agreement between Anthropic and Google Cloud is throwing into sharp relief just how concentrated — and how enormous — the artificial intelligence boom has become.
Anthropic has committed to spending roughly US$200 billion on Google's cloud services over five years, according to a person familiar with the matter cited by The Information — a figure that would make the start-up one of the largest buyers of computing capacity in history.
The deal suggests Anthropic alone accounts for more than 40% of the revenue backlog recently disclosed by Google's parent, Alphabet — a measure of future income tied to long-term contracts.
The AI gold rush reshaping cloud demand
The agreement reflects a broader shift across the technology industry: a handful of fast-growing AI companies are now driving an outsized share of demand for cloud infrastructure, committing staggering sums to secure the computing power needed to train and run advanced models, according to Reuters.
Anthropic's appetite for capacity has expanded rapidly alongside the popularity of its Claude family of models. The company has struck a series of large-scale agreements across the industry, including a recent partnership with Broadcom and Google for multiple gigawatts of tensor processing unit (TPU) capacity expected to come online starting in 2027. It has also signed a multiyear deal with CoreWeave and is set to secure additional capacity through Amazon Web Services.
Two companies, half a trillion dollars
Taken together, contracts involving Anthropic and its chief rival, OpenAI, now account for roughly half of the nearly US$2 trillion in revenue backlogs reported by major cloud providers, including Amazon, Microsoft, and Google, according to The Information. Those backlogs — which reflect future contractual commitments rather than current revenue — have ballooned as cloud companies race to lock in long-term demand from AI developers.
The scale of the spending is staggering. Anthropic had previously projected it could spend more than US$20 billion on cloud computing this year alone, while OpenAI is expected to spend about US$45 billion, up sharply from the year before. Both companies are betting that demand for AI services will grow fast enough to justify those investments.
Cloud giants double down on AI anchors
For cloud providers, the strategy is clear: invest heavily in today's AI leaders in the hope that they become anchor customers whose long-term spending far exceeds the initial outlay. Alphabet, for example, has committed up to US$40 billion to Anthropic, deepening a relationship that is both collaborative and competitive in the race to develop advanced AI systems.
Yet the concentration also introduces risk. Investors have begun to question whether such massive spending commitments will fully materialize, particularly because they depend on aggressive growth assumptions: the two leading AI start-ups have projected that their revenues could grow 20- to 30-fold by the end of the decade.
Beyond servers: a new profit playbook
Even so, the momentum is reshaping the economics of cloud computing. Providers are not only renting out servers but also earning revenue by reselling AI models to their own customers — a business that could generate billions more in high-margin income. Companies like Amazon and Google also benefit from using their own custom-designed chips, which can improve profitability compared with relying on more expensive hardware from Nvidia.
The result is an increasingly intertwined ecosystem in which cloud providers, chipmakers, and AI developers are bound together by multibillion-dollar commitments — and by a shared bet that the demand for artificial intelligence is still in its early stages.
As one industry observer put it, the AI boom may be global, but for now, much of its future hinges on the spending plans of just two companies.
Article edited by Jerry Chen