NVIDIA and OpenAI have entered a landmark letter of intent under which the GPU maker will deliver at least 10 gigawatts of accelerated compute, equivalent to millions of NVIDIA GPUs, and will invest up to $100 billion in OpenAI as each gigawatt comes online. Deployment kicks off in the second half of 2026 on NVIDIA’s next-generation Vera Rubin platform, with both companies co-designing hardware, software, and data center innovations to maximize performance per watt.
NVIDIA’s commitment of up to $100 billion is organized as a series of incremental capital infusions, where each deployed gigawatt of systems unlocks the next tranche of funding. Deployments begin in the second half of 2026 with initial rack-level clusters and scale steadily to the full 10 GW footprint. The definitive agreement will outline the precise mix of convertible notes and equity instruments, subject to regulatory approvals and final documentation. This milestone-based structure gives OpenAI a reliable pipeline of compute resources while guaranteeing NVIDIA demand visibility for its hardware roadmap well into the next decade.
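The mechanics of the milestone structure can be sketched with simple arithmetic. The even per-gigawatt split below is an illustrative assumption, not a disclosed term; the actual tranche sizes will be set in the definitive agreement.

```python
# Illustrative sketch of the milestone-based funding structure.
# ASSUMPTION: funding unlocks evenly per deployed gigawatt; the real
# tranche schedule has not been disclosed.

TOTAL_INVESTMENT_B = 100.0  # up to $100B total, per the letter of intent
TOTAL_GIGAWATTS = 10.0      # at least 10 GW of systems

def funding_unlocked_b(deployed_gw: float) -> float:
    """Cumulative funding (in $B) unlocked once `deployed_gw` gigawatts
    are online, assuming an even split across the 10 GW footprint."""
    deployed_gw = min(max(deployed_gw, 0.0), TOTAL_GIGAWATTS)
    return TOTAL_INVESTMENT_B * deployed_gw / TOTAL_GIGAWATTS

for gw in (1, 5, 10):
    print(f"{gw} GW deployed -> up to ${funding_unlocked_b(gw):.0f}B unlocked")
```

Under this even-split assumption, each gigawatt brought online would unlock up to $10 billion of the committed capital.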
At the heart of the partnership lies the Vera Rubin platform, a turnkey solution that integrates NVIDIA’s Rubin GPUs, Vera CPUs, programmable DPUs, NVLink-C2C connectivity, advanced liquid cooling, and optimized power distribution. Its modular design supports 2 MW to 4 MW deployments that can be replicated across colocation sites, with cooling architectures targeting a Power Usage Effectiveness (PUE) below 1.1. An end-to-end software stack, spanning CUDA, cuDNN, Megatron-LM, and beyond, is co-engineered to extract maximum throughput and efficiency. By aligning on this full hardware-software stack, NVIDIA and OpenAI aim to drive down both capex and opex for hyperscale AI training.
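To put the efficiency target in perspective: PUE is defined as total facility power divided by IT equipment power, so a lower PUE means more of the facility's power budget reaches the compute itself. A minimal sketch, using illustrative figures rather than disclosed specifications:

```python
# PUE = total facility power / IT equipment power.
# The facility sizes and PUE values below are illustrative
# assumptions, not disclosed deployment specs.

def it_power_gw(total_facility_gw: float, pue: float) -> float:
    """IT load (GW) supported by a facility operating at a given PUE."""
    return total_facility_gw / pue

def overhead_gw(total_facility_gw: float, pue: float) -> float:
    """Power (GW) consumed by cooling and distribution overhead."""
    return total_facility_gw - it_power_gw(total_facility_gw, pue)

total = 10.0  # GW, the full planned footprint
for pue in (1.5, 1.2, 1.1):
    print(f"PUE {pue}: {it_power_gw(total, pue):.2f} GW IT load, "
          f"{overhead_gw(total, pue):.2f} GW overhead")
```

At a 10 GW total footprint, moving from a PUE of 1.5 to below 1.1 would redirect well over a gigawatt from cooling and distribution overhead into usable compute.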
This deal elevates NVIDIA from supplier to strategic investor. In parallel with hardware delivery, NVIDIA will collaborate on benchmarking, reliability engineering, supply-chain forecasting, and sustainability planning. For OpenAI, guaranteed access to 10 GW of cutting-edge compute de-risks its roadmap for ever-larger models and foundation-model research.
Competitors and cloud incumbents such as AWS, Google Cloud, and Oracle will now face intensified pressure to secure similar long-term commitments from chipmakers. NVIDIA’s gambit also sharpens the competitive divide between its own hardware ecosystem and alternatives from AMD, Intel, and emerging AI accelerator startups.
Just weeks earlier, Microsoft and OpenAI restructured their partnership to loosen exclusivity on Azure for AI compute and to redefine governance, equity stakes, and cloud commitments. By securing NVIDIA’s $100 billion compute deal, OpenAI gains leverage and diversification of capital and infrastructure.
This fresh partnership with NVIDIA likely provided the impetus for OpenAI and Microsoft to revisit their terms, freeing OpenAI to tap multiple vendors while allowing Microsoft to double down on software integration (Azure AI services, Windows Copilot, Office 365 embedding) rather than shoulder the entire hardware burden.
As NVIDIA and OpenAI gear up to deploy 10 GW-scale AI infrastructure, data-center operators will rush to integrate immersion-cooling and liquid-cooled rack designs capable of handling such massive footprints. At the same time, energy grids and colocation providers will need to rethink capacity planning to sustain continuous multi-gigawatt loads while driving Power Usage Effectiveness down toward industry-leading levels. This surge in committed compute power could accelerate research timelines for artificial general intelligence, as broader access to giant-scale training clusters enables faster iteration and experimentation. In response, other hyperscalers are likely to unveil their own long-term partnerships or innovative financing models to lock in future GPU supply and maintain competitive positioning in the AI arms race.
Final contract signing and regulatory clearances are expected in early 2026. Watch for detailed technical disclosures on Vera Rubin performance benchmarks, cooling and efficiency metrics, and the exact financing instruments NVIDIA will employ. As NVIDIA, OpenAI, and Microsoft recalibrate their respective roles as hardware provider, model developer, and software integrator, the AI landscape enters a new era in which compute scale, investment strategy, and cross-company coordination will determine who leads the next wave of breakthroughs.


