OpenAI today announced plans to build five new data center sites in the U.S. as part of its Stargate initiative.
Launched in January, Stargate is intended to provide the artificial intelligence provider with 10 gigawatts’ worth of computing infrastructure. The project is expected to cost $500 billion over four years.
OpenAI is partnering with Oracle Corp. on three of the upcoming data center sites. The facilities will be built in Shackelford County, Texas; Doña Ana County, New Mexico; and an as-yet-unspecified Midwest location. OpenAI plans to share more details about the latter project in the near future.
According to the New York Times, Oracle will finance the three data center sites and oversee their construction. The database maker reportedly hopes to cover some of the project’s costs through “new kinds of financial deals with various partners,” which hints that it could bring external investors aboard.
The collaboration is part of a $300 billion cloud infrastructure deal that OpenAI signed with Oracle earlier this month. According to the Wall Street Journal, the database maker expects to start generating revenue from the contract in 2027.
OpenAI will build the two other data center campuses it previewed today through a partnership with SoftBank Group Corp., one of its largest investors. The Japanese tech giant led a $40 billion round for the ChatGPT developer in March.
The first site is located in Lordstown, Ohio. SoftBank broke ground earlier this year and expects to bring the data center online in 2026. The second campus will be developed by the company’s SB Energy infrastructure business in Milam County, Texas. OpenAI says the two sites can be equipped with 1.5 gigawatts’ worth of compute infrastructure within 18 months.
The facilities will join a data center campus in Abilene, Texas, that Oracle started building for the ChatGPT developer earlier this year. OpenAI detailed today that the facility’s first server racks came online in June. That hardware is already powering AI training and inference workloads.
The server racks currently installed at the Abilene site are powered by Nvidia Corp.’s GB200 chip, which combines two Blackwell B200 graphics processing units with a Grace central processing unit. Oracle reportedly expects to install more than 64,000 GB200 chips in the data center by next March.
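For scale, the short Python sketch below converts the reported chip count into a GPU count, assuming two Blackwell GPUs per GB200 as described above. The inputs come from the reporting; the result is an estimate, not a figure confirmed by Oracle.

```python
# Rough GPU count implied by the reported GB200 order for the Abilene site.
# Each GB200 packages two Blackwell B200 GPUs alongside a Grace CPU.
GB200_CHIPS = 64_000     # reported installation target by next March
GPUS_PER_GB200 = 2       # two B200 GPUs per GB200

total_gpus = GB200_CHIPS * GPUS_PER_GB200
print(f"Implied Blackwell GPU count: {total_gpus:,}")  # -> 128,000
```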
OpenAI will also use newer Nvidia silicon. Earlier this week, it announced plans to adopt the chipmaker’s upcoming Vera Rubin chip, which includes an 88-core CPU and a GPU based on the next-generation Rubin architecture. OpenAI disclosed the plan in conjunction with the news that it will raise up to $100 billion from Nvidia to finance data center construction projects.
Last month, Nvidia Chief Executive Officer Jensen Huang stated that building 1 gigawatt of AI infrastructure costs $50 billion to $60 billion. He said that the chipmaker’s hardware accounts for well over half that sum.
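For a rough sense of how Huang’s estimate squares with Stargate’s stated budget, the minimal Python sketch below multiplies his per-gigawatt range by the project’s 10-gigawatt target. The figures come from the reporting above; this is an illustrative cross-check, not a cost breakdown from OpenAI, Oracle or Nvidia.

```python
# Back-of-the-envelope cross-check of the cost figures cited above.
# Inputs: Huang's $50 billion to $60 billion per gigawatt estimate and
# Stargate's stated 10-gigawatt compute target.

COST_PER_GW_LOW = 50e9    # $50 billion per gigawatt (low end of Huang's range)
COST_PER_GW_HIGH = 60e9   # $60 billion per gigawatt (high end)
STARGATE_GW = 10          # Stargate's stated compute target in gigawatts

low_total = STARGATE_GW * COST_PER_GW_LOW
high_total = STARGATE_GW * COST_PER_GW_HIGH
print(f"Implied 10 GW build-out cost: ${low_total / 1e9:,.0f}B to ${high_total / 1e9:,.0f}B")
# -> $500B to $600B

# "Well over half" of each gigawatt's cost going to Nvidia hardware implies
# at least roughly $25 billion to $30 billion per gigawatt for the chipmaker.
nvidia_floor_low = 0.5 * COST_PER_GW_LOW
nvidia_floor_high = 0.5 * COST_PER_GW_HIGH
print(f"Implied Nvidia share per GW: more than ${nvidia_floor_low / 1e9:.0f}B to ${nvidia_floor_high / 1e9:.0f}B")
```

The low end of that range lands on the $500 billion Stargate budget cited at the top of the article.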
In addition to the five data centers announced today, OpenAI may build a 600-megawatt site near its Abilene campus. Together, the facilities will draw more than 5.5 gigawatts of power. OpenAI expects the projects to create more than 25,000 onsite jobs and tens of thousands more nationwide.