Copyright SiliconANGLE News

OpenAI Group PBC will rent $38 billion worth of cloud infrastructure from Amazon Web Services Inc. as part of a seven-year partnership announced today.

The news comes less than a week after the ChatGPT developer revised the terms of its relationship with Microsoft Corp. The tech giant was OpenAI’s sole cloud provider for about two years. Under the modified partnership, Microsoft no longer has the right of first refusal on OpenAI cloud contracts.

The new deal with AWS will give the ChatGPT developer access to hundreds of thousands of Nvidia Corp. graphics processing units. According to the companies, OpenAI will use the chipmaker’s latest GB200 and GB300 chips. Both processors combine two graphics processing units with one central processing unit. The GB200 is also available in a supersized version, the GB200 NVL4, that includes four Blackwell GPUs and two CPUs. The GB300, in turn, is based on Nvidia’s newer Blackwell Ultra graphics card. A single Blackwell Ultra can provide about 15 petaflops of performance.

The chips that AWS plans to make available to OpenAI will be deployed in Amazon EC2 UltraServers. The machines are based on a set of custom components called the AWS Nitro System. One of those components is the Nitro Security Chip, which offloads certain cybersecurity tasks from an UltraServer’s main processors.

OpenAI will start using AWS infrastructure immediately. The companies plan to deploy all the computing capacity included in the contract before the end of 2026. From 2027 onward, OpenAI will be able to expand its AWS environment further.

“Scaling frontier AI requires massive, reliable compute,” said OpenAI Chief Executive Officer Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI’s largest cloud provider is Oracle Corp., which recently won a $300 billion infrastructure contract from the ChatGPT developer.
The companies are building a network of U.S. data centers that will have 4.5 gigawatts of capacity. One gigawatt is enough to power about 750,000 homes. The Oracle-built facilities are headlined by a campus in Abilene, Texas that started coming online earlier this year. According to the company, the campus will contain 450,000 Nvidia GPUs at full capacity.

OpenAI also continues to use infrastructure from Microsoft, its former exclusive cloud provider. The ChatGPT developer agreed to purchase $250 billion worth of Azure compute capacity as part of its recent reorganization. OpenAI also has infrastructure deals with CoreWeave Inc. and Google LLC.

OpenAI’s best-funded startup rival, Anthropic PBC, likewise uses AWS infrastructure to power its AI models. Last week, the Amazon.com Inc. unit opened an $11 billion data center campus dedicated to running Anthropic’s training and inference workloads. The campus contains about 500,000 of AWS’s custom Trainium2 chips, a number that is expected to double by year’s end.

Photo: AWS