Microsoft is partnering with Australian data center operator IREN to deploy $9.7 billion worth of NVIDIA GPUs, accelerating its Copilot and Azure AI infrastructure buildout.
Microsoft has committed another $9.7 billion to secure NVIDIA’s cutting-edge GB300 GPUs, a move that underscores the intensifying AI arms race among hyperscalers and the growing demand for compute power. Rather than building new data centers from scratch, Microsoft is partnering with IREN, an Australia-based AI cloud infrastructure provider, to rapidly deploy these chips at scale.
This investment is designed to support the explosive growth of Microsoft’s Copilot AI assistant across Windows, Microsoft 365, GitHub, and Azure. These services rely on NVIDIA’s GB300 architecture, which delivers massive performance gains for training and inference of large language models.
Eric Boyd, Corporate VP of Azure AI, noted, “We’re seeing incredible demand for AI services, and we’re scaling aggressively to meet it.” That scaling is now being outsourced to IREN, which will host the GPUs in its liquid-cooled data centers in Childress, Texas, offering up to 200 megawatts of critical IT capacity.
The $9.7 billion figure includes a 20% prepayment from Microsoft to IREN, enabling IREN to purchase servers and infrastructure through a separate $5.8 billion deal with Dell Technologies. This approach allows Microsoft to bypass the slow and costly process of building new facilities, while also avoiding the risk of hardware obsolescence in a fast-moving chip market.
NVIDIA’s GB300 chips are the backbone of this deal, offering unprecedented memory bandwidth and compute performance. Microsoft’s investment ensures priority access to these GPUs, which are in short supply globally. By locking in this capacity through IREN, Microsoft secures its position at the forefront of AI infrastructure.
IREN, which pivoted from crypto mining to AI cloud services, is emerging as a key player in what some analysts are calling the "neocloud" sector: specialized infrastructure providers built for high-density AI workloads.
In many ways, the current AI boom resembles a closed-loop economy among tech giants, where companies like Microsoft, NVIDIA, OpenAI, Google, Amazon, and newer entrants are effectively recycling capital to sustain momentum and market confidence. Microsoft pours billions into NVIDIA's GPUs to power its AI infrastructure. NVIDIA, in turn, sees its valuation soar, which justifies further investment from cloud providers and enterprise customers. OpenAI benefits from Microsoft's infrastructure and funding, while simultaneously driving demand for more compute. Google and Amazon follow suit, investing heavily in their own chips and models to stay competitive, often partnering with or acquiring startups that feed back into the same ecosystem.
This cycle creates a feedback loop where inflated budgets and strategic partnerships serve to validate each other's valuations and future revenue projections. The result is a kind of performative profitability: stock prices and investor sentiment are buoyed not by realized profits from AI products, but by the promise of future dominance. Until generative AI tools like Copilot, Gemini, or Claude deliver consistent, enterprise-scale returns, much of the sector's financial optimism rests on projected value rather than proven monetization. In essence, the AI economy is still in its speculative phase, and the giants are keeping the music playing by passing the same chips, models, and dollars around the room.


