Key Facts
- OpenAI has committed to approximately $1 trillion in infrastructure partnerships throughout 2025, including deals with Nvidia, AMD, Oracle, and Broadcom
- The company plans to deploy at least 26 gigawatts of data center capacity across multiple partnerships, enough to power more than 20 million US homes
- OpenAI is transitioning from cloud customer to self-hosted provider, building its own data centers to serve 700 million weekly active users
- The company is developing custom AI processors with Broadcom using Ethernet-based networking instead of Nvidia’s InfiniBand technology
- Nvidia agreed to invest up to $100 billion as OpenAI deploys Nvidia systems, while AMD will grant OpenAI up to 10% of its stock over time in exchange for adopting and co-developing AMD chips
- Five new Stargate data center sites across Texas, New Mexico, Ohio, and the Midwest will create over 25,000 jobs
OpenAI Transforms Into Cloud Infrastructure Provider
OpenAI is executing an aggressive strategy to become its own cloud infrastructure provider, moving beyond its traditional role as a customer of Microsoft Azure, Oracle, and other hosting services. This transformation addresses a fundamental constraint: the company cannot meet current demand for ChatGPT, let alone support the advanced AI models it plans to release.
The infrastructure buildout solves multiple problems simultaneously. ChatGPT Plus subscribers already encounter usage limits when accessing compute-intensive features like image generation or advanced reasoning models. Free users face even stricter restrictions. By controlling its own data centers, OpenAI can eliminate these bottlenecks while reducing dependency on third-party cloud providers whose pricing and capacity allocation may not align with the company’s growth trajectory.
Massive Data Center Deployments Address Capacity Crisis
The Stargate project, developed with Oracle and SoftBank, represents OpenAI’s largest infrastructure commitment. The initiative now encompasses six major sites across the United States with nearly 7 gigawatts of planned capacity and over $400 billion in investment over three years.
Three locations will be developed through the OpenAI-Oracle partnership: Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed Midwest location. These sites, combined with a 600-megawatt expansion near the flagship campus in Abilene, Texas, can deliver over 5.5 gigawatts of capacity. The Abilene campus is already operational, with Oracle delivering Nvidia hardware in June. OpenAI has begun training new models and running ChatGPT inference operations from this facility.
SoftBank will develop two additional sites with OpenAI. The Lordstown, Ohio location, where construction has already started, should be operational next year. A second site in Milam County, Texas, developed with SB Energy, will follow. These two facilities may scale to 1.5 gigawatts over 18 months.
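As a quick sanity check, the two development tracks roughly add up to the program total. Here is a minimal sketch in Python, using only the figures quoted above:

```python
# Back-of-envelope tally of announced Stargate capacity; site-level
# breakdowns within each track are not publicly disclosed.
oracle_track_gw = 5.5    # three Oracle-partnered sites plus the Abilene expansion
softbank_track_gw = 1.5  # Lordstown, Ohio and Milam County, Texas combined

total_gw = oracle_track_gw + softbank_track_gw
print(f"Announced Stargate capacity: ~{total_gw} GW")  # ~7.0 GW, matching "nearly 7 gigawatts"
```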
The companies reviewed over 300 proposals from more than 30 states before selecting these locations. The July agreement between OpenAI and Oracle to develop up to 4.5 gigawatts of additional Stargate capacity represents a partnership worth over $300 billion across five years.
Strategic Hardware Partnerships Diversify Supply Chains
Sam Altman, CEO of OpenAI. Image credit: Steve Jurvetson via Flickr, CC BY 2.0 license
OpenAI secured a landmark agreement with Nvidia to deploy at least 10 gigawatts of Nvidia systems representing millions of GPUs. To support this deployment, including data center and power capacity, Nvidia intends to invest up to $100 billion in OpenAI progressively as each gigawatt comes online. The first phase targets deployment in the second half of 2026 using Nvidia’s Vera Rubin platform.
“NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT,” said Jensen Huang, founder and CEO of Nvidia. “This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.”
This represents Nvidia’s first direct sales relationship with OpenAI. Previously, OpenAI accessed Nvidia hardware exclusively through cloud providers like Microsoft Azure, Oracle, and CoreWeave. Huang explained that these direct sales, which include systems and networking beyond just GPUs, are intended to prepare OpenAI for when it becomes its own self-hosted hyperscaler operating its own data centers.
The AMD partnership takes a different approach. AMD agreed to grant OpenAI large tranches of stock, up to 10% of the company over several years, contingent on factors including stock price increases. In exchange, OpenAI will use and help develop AMD’s next-generation AI GPU chips. This structure makes OpenAI a shareholder in AMD, while Nvidia’s investment makes the chipmaker a shareholder in OpenAI.
The AMD deal encompasses 6 gigawatts of data center capacity. Combined with the Nvidia partnership, Stargate commitments, and European expansions including “Stargate UK,” OpenAI has secured approximately $1 trillion worth of infrastructure agreements in 2025 alone.
Custom Silicon Development Reduces Vendor Dependency
OpenAI partnered with Broadcom to co-develop and deploy its first in-house AI processors, marking a significant step toward vertical integration. The multi-year collaboration will deploy 10 gigawatts of OpenAI-designed accelerators and Broadcom’s Ethernet-based networking systems starting in 2026.
“By designing its own chips and systems, OpenAI can embed what it’s learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence,” the two companies stated. The racks will be scaled entirely with Ethernet and other connectivity solutions from Broadcom, meeting surging global demand for AI with deployments across OpenAI’s facilities and partner data centers.
The decision to rely on Broadcom’s Ethernet fabric rather than Nvidia’s InfiniBand interconnects signals OpenAI’s intent to build a more open and scalable networking backbone. This choice aligns with broader industry momentum toward open networking standards that deliver flexibility and interoperability.
“OpenAI’s choice signals a shift toward more open, cost-efficient, and scalable architectures,” said Charlie Dai, VP and principal analyst at Forrester. “Ethernet offers broader interoperability and avoids vendor lock-in, which could accelerate the adoption of disaggregated AI clusters.”
Lian Jye Su, chief analyst at Omdia, noted that the decision reflects a future of AI workloads running on heterogeneous computing and networking infrastructure. “While it makes sense for enterprises to first rely on Nvidia’s full stack solution to roll out AI, they will generally integrate alternative solutions such as AMD and self-developed chips for cost efficiency, supply chain diversity, and chip availability,” Su said.
How Self-Hosted Infrastructure Improves AI Access
Becoming its own cloud provider allows OpenAI to optimize infrastructure specifically for AI workloads rather than adapting to general-purpose cloud services. This specialization can reduce latency, improve response times, and enable features that might be impractical when constrained by third-party infrastructure limitations.
Direct control over hardware deployment means OpenAI can prioritize capacity allocation based on actual user needs rather than negotiating resources with cloud providers who serve multiple customers. This is particularly important for training next-generation models, a process requiring thousands of specialized chips running continuously for months.
The scale of deployment—26 gigawatts across multiple partnerships—provides headroom for growth. Ten gigawatts equals roughly the output of 10 nuclear reactors, enough electricity to power millions of homes. This capacity addresses both current demand from 700 million weekly users and future requirements as more advanced models become available.
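That household comparison can be checked with simple arithmetic. The sketch below assumes an average US household draws about 1.2 kW on a continuous basis (roughly 10,500 kWh per year); that consumption figure is our assumption, not from the announcements.

```python
# Convert data center capacity into rough household equivalents.
# ASSUMPTION: an average US household draws ~1.2 kW continuously
# (about 10,500 kWh per year); actual figures vary by region.
AVG_HOUSEHOLD_KW = 1.2

def homes_powered(gigawatts: float) -> float:
    """Approximate number of homes a given capacity could supply."""
    kilowatts = gigawatts * 1_000_000  # 1 GW = 1,000,000 kW
    return kilowatts / AVG_HOUSEHOLD_KW

print(f"10 GW ≈ {homes_powered(10):,.0f} homes")  # roughly 8 million
print(f"26 GW ≈ {homes_powered(26):,.0f} homes")  # roughly 22 million
```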
Sam Altman emphasized this point: “Everything starts with compute. Compute infrastructure will be the basis for the economy of the future, and we will utilize what we’re building with NVIDIA to both create new AI breakthroughs and empower people and businesses with them at scale.”
Financial Structure Raises Sustainability Questions
The financial arrangements underlying these partnerships have drawn scrutiny. Nvidia’s investment structure involves the chip manufacturer investing up to $100 billion in OpenAI as the company deploys Nvidia systems—essentially funding purchases from itself. Oracle follows a similar pattern, with a reported $30 billion per year deal where Oracle builds facilities that OpenAI pays to use.
AMD’s stock grant arrangement adds another dimension to these circular investments. Rather than OpenAI paying cash for AMD chips, AMD grants equity in exchange for OpenAI using and helping develop its technology.
Huang acknowledged the financial constraints openly, stating that OpenAI doesn’t “have the money yet” to pay for all this equipment. He estimated that each gigawatt of AI data center capacity will cost OpenAI “$50 to $60 billion” to cover everything from land and power to servers and equipment.
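Applying that per-gigawatt estimate across the roughly 26 gigawatts announced in 2025 shows the scale of the funding question. A hedged back-of-envelope calculation, assuming Huang’s figure applies uniformly to every site:

```python
# Implied buildout cost at Huang's estimate of $50-60 billion per gigawatt,
# applied uniformly to the ~26 GW announced across all partnerships.
COST_PER_GW_LOW = 50e9   # USD, low end of the estimate
COST_PER_GW_HIGH = 60e9  # USD, high end
PLANNED_GW = 26

low_total = PLANNED_GW * COST_PER_GW_LOW
high_total = PLANNED_GW * COST_PER_GW_HIGH
print(f"Implied total cost: ${low_total / 1e12:.1f}-{high_total / 1e12:.1f} trillion")
# Roughly $1.3-1.6 trillion for the full buildout.
```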
The Information reported that Nvidia is discussing leasing its chips to OpenAI rather than selling them outright. Under this structure, Nvidia would create a separate entity to purchase its own GPUs, then lease them to OpenAI, adding another layer of financial complexity.
Critics describe these arrangements as circular investments where infrastructure providers invest in AI companies that become their biggest customers. As Bryn Talkington of Requisite Capital Management told CNBC: “Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia.”
OpenAI’s revenue currently falls far short of supporting a trillion dollars in commitments, though it is growing rapidly, reportedly hitting $4.5 billion in the first half of 2025. The company is betting that future models will be significantly more capable, driving demand that justifies current infrastructure spending.
More Partnerships Expected Soon
CEO Sam Altman indicated that OpenAI’s dealmaking continues. When asked about recent agreements during an a16z Podcast interview, Altman said, “You should expect much more from us in the coming months.”
Altman explained that OpenAI sees its future models and upcoming products as much more capable, fueling substantially more demand. “We have decided that it is time to go make a very aggressive infrastructure bet,” he said.
The CEO acknowledged that achieving this scale requires industry-wide participation. “To make the bet at this scale, we kind of need the whole industry, or big chunk of the industry, to support it. And this is from the level of electrons to model distribution and all the stuff in between, which is a lot. So we’re going to partner with a lot of people,” Altman said.
Greg Brockman, co-founder and President of OpenAI, added: “We’ve been working closely with NVIDIA since the early days of OpenAI. We’ve utilized their platform to create AI systems that hundreds of millions of people use every day. We’re excited to deploy 10 gigawatts of compute with NVIDIA to push back the frontier of intelligence and scale the benefits of this technology to everyone.”
Despite warnings of an AI bubble—Altman himself warned last month that “someone will lose a phenomenal amount of money”—the company maintains confidence in its strategy. “I’ve never been more confident in the research road map in front of us and also the economic value that will come from using those [future] models,” Altman stated.
Even if AI demand fails to meet projections, the physical infrastructure won’t disappear. When the dot-com bubble burst in 2001, fiber optic cable laid during boom years eventually found use as Internet demand caught up. Similarly, these facilities could potentially serve cloud services, scientific computing, or other workloads, though possibly at substantial losses for investors who paid AI-boom prices.
The infrastructure push represents OpenAI’s attempt to remove barriers between its technology and users. Whether these massive investments prove justified depends on whether future AI capabilities match the company’s projections and whether enough customers are willing to pay for them at scale.
If you are interested in this topic, we suggest you check out our articles:
- Cleverly Intelligent Systems Integration in AI Infrastructure Projects
- Is Sam Altman Right About ChatGPT Pulse?
- Trump Administration Unveils $500 Billion Stargate Initiative for AI Infrastructure
Sources: TechCrunch, OpenAI, NetworkWorld, ArsTechnica
Written by Alius Noreika