OpenAI has signed a $38 billion deal with Amazon Web Services, ending years of exclusive reliance on Microsoft and marking a pivotal shift in its infrastructure strategy. Under the seven-year agreement announced Monday, OpenAI gains immediate access to hundreds of thousands of Nvidia GPUs across AWS data centers, with the capacity to expand as needed through 2032.
Amazon stock climbed 5% following the announcement, as investors recognized the strategic value of securing the $500 billion AI leader as a major customer. The deal represents OpenAI’s largest infrastructure commitment outside Microsoft’s Azure platform, signaling a deliberate multi-cloud strategy to avoid vendor lock-in.
“Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in Monday’s release. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era.”
This AWS partnership crystallizes OpenAI’s infrastructure independence. Until January 2025, Microsoft held exclusive cloud provider rights, a constraint that became increasingly problematic as ChatGPT scaled to 800 million weekly users.
Last week, Microsoft’s remaining right of first refusal also ended, freeing OpenAI to build relationships with all major cloud providers.
The timing coincides with OpenAI’s broader $1.4 trillion infrastructure buildout, which includes deals with Oracle, Google Cloud, and Broadcom. OpenAI also committed to purchase an additional $250 billion in Azure services, ensuring Microsoft remains a major partner despite losing exclusivity.
For AWS, which holds 30% of the global cloud infrastructure market, the deal addresses a competitive vulnerability. While AWS remains the market leader, its Q3 2025 growth of 20% trailed Microsoft Azure’s 40% and Google Cloud’s 34% expansion. Securing OpenAI helps AWS counter the perception that rivals are winning the AI infrastructure race. “The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads,” AWS CEO Matt Garman said.
The infrastructure will power both ChatGPT’s real-time inference and training of next-generation models. While the current agreement focuses on Nvidia’s Blackwell chips, future phases could incorporate Amazon’s custom Trainium processors, which power Anthropic’s $11 billion data center.
This diversification reflects a broader industry pattern. OpenAI has locked in more than 26 gigawatts of computing capacity across multiple vendors, infrastructure that costs far more than its current revenue supports.
The company is exploring government contracts, hardware products, and even becoming a compute supplier itself through its Stargate data centers.
For investors and enterprise customers, the AWS deal signals OpenAI’s operational maturity. By spreading infrastructure risk across providers and securing long-term capacity, OpenAI demonstrates the independence and scale required for an eventual IPO.
Altman recently acknowledged that going public is “the most likely path” given the company’s enormous capital requirements.