Amazon's $38B OpenAI Deal Proves NVIDIA's Monopoly Is Already Breaking


🕒︎ 2025-11-03

Copyright Benzinga


The fact that OpenAI needed $660 billion across five cloud providers to avoid vendor lock-in tells you everything about NVIDIA's pricing power problem.

Amazon.com Inc's (NASDAQ:AMZN) $38 billion OpenAI deal sent AMZN shares up 5% and NVIDIA Corporation (NASDAQ:NVDA) up 3% this week. Wall Street interpreted the announcement as validation that AWS could compete in the AI infrastructure race. The narrative seemed simple: OpenAI secures hundreds of thousands of cutting-edge NVIDIA GPUs, and Amazon solidifies its AI cloud position.

But investors are missing a critical detail from five days earlier. Amazon revealed that Anthropic (OpenAI's biggest rival and a company Amazon invested $8 billion in) is now running on 500,000 of Amazon's custom Trainium2 chips, scaling to over 1 million chips by year-end. While OpenAI commits to NVIDIA's premium GPUs, Anthropic is demonstrating that Amazon's custom silicon can train frontier AI models at a fraction of the cost. If that bet succeeds, it fundamentally reshapes who controls AI infrastructure economics and threatens NVIDIA's $5 trillion market cap.

The Tale of Two Strategies: Why Amazon Is Playing Both Sides

Amazon is executing a dual strategy:

Strategy A (OpenAI): Sell NVIDIA GPUs through AWS cloud services. Amazon captures infrastructure revenue, but NVIDIA keeps the fat profit margins on the chips themselves.

Strategy B (Anthropic): Deploy Amazon's custom Trainium2 chips. Amazon captures both the infrastructure revenue and the chip margins, cutting NVIDIA out entirely.

AWS claims Trainium2 delivers 30-40% better price-performance than GPU-based instances for training workloads. For a company like Anthropic spending billions annually on compute, that translates to hundreds of millions in savings. Anthropic's revenue grew from approximately $1 billion at the beginning of 2025 to over $5 billion by August, powered largely by this cost advantage. Amazon wins either way.
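The "hundreds of millions in savings" claim can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming a hypothetical $2 billion annual training budget (the article says only "billions") and reading "X% better price-performance" as the same workload costing X% less:

```python
# Back-of-envelope savings from a 30-40% price-performance advantage.
# annual_gpu_spend is a hypothetical figure, not a reported number.
annual_gpu_spend = 2_000_000_000  # $2B/year on GPU-based training (assumption)

for advantage in (0.30, 0.40):
    # Interpret the advantage as: same workload at (1 - advantage) of the cost.
    equivalent_cost = annual_gpu_spend * (1 - advantage)
    savings = annual_gpu_spend - equivalent_cost
    print(f"{advantage:.0%} advantage -> ~${savings / 1e6:,.0f}M saved per year")
```

At that assumed spend, the claimed advantage works out to roughly $600-800 million a year, consistent with the article's "hundreds of millions."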
If OpenAI succeeds with NVIDIA's ecosystem, AWS books $38 billion in revenue. If Anthropic succeeds with Trainium2, Amazon proves custom silicon can compete, and every other AI lab will demand the same economics.

[Chart: Whether OpenAI succeeds with NVIDIA GPUs or Anthropic's bet proves viable, Amazon wins.]

The Technical Bet That Changes Everything

What makes Anthropic's deployment remarkable is that Trainium2 is specifically optimized for Anthropic's reinforcement learning workloads, which are more memory-bandwidth-bound than raw compute-bound. NVIDIA builds Ferrari engines designed for maximum horsepower. Anthropic needed fuel-efficient trucks optimized for long-haul routes. Amazon built exactly that, and Anthropic was heavily involved in the chip design process, essentially using Amazon's Annapurna Labs as a custom silicon partner.

This is the same hardware-software co-design strategy that made Apple Inc's (NASDAQ:AAPL) M-series chips so dominant. Alphabet Inc's (NASDAQ:GOOGL) Google pioneered it with TPUs for DeepMind. Now Anthropic and Amazon are executing it at massive scale. Project Rainier delivers five times the compute power Anthropic used for previous model generations. As Ron Diamant, AWS vice president and engineer, told reporters: "When we build our own devices, we get to optimize across the entire stack to really compress engineering time and the time to get to massive scale."

The NVIDIA Vulnerability Markets Are Missing

For two decades, NVIDIA's moat was CUDA, the proprietary software ecosystem that made switching away prohibitively expensive. Developers spent years mastering CUDA, and rewriting production code for alternative chips meant months of engineering work. But that lock-in is breaking. OpenAI's Triton compiler and frameworks like PyTorch 2.0 now allow developers to write code that runs on both NVIDIA GPUs and competing chips without modification.
The switching cost that once measured in the millions of dollars is becoming a six-month engineering project. More critically, leading AI cloud providers, including Amazon and Google, have ramped up their in-house chip efforts rather than relying on NVIDIA. This represents systematic replacement. When your two largest customers (Microsoft Corporation (NASDAQ:MSFT) and Amazon, together representing 39% of NVIDIA's revenue according to recent SEC filings) are building competing alternatives, you face a structural reset.

[Chart: A projected timeline shows NVIDIA's adoption declining from 95% in 2024 to 60% in 2027, while Trainium2 grows from near zero to 38% market share on 40-50% cost advantages.]

OpenAI's Multi-Cloud Escape Plan

OpenAI's true strategy reveals itself in the numbers. The company now has commitments with Microsoft ($250 billion), Oracle Corporation (NYSE:ORCL) ($300 billion), Google (tens of billions), AWS ($38 billion), and CoreWeave ($22.4 billion): over $660 billion in total infrastructure spending.

Until recently, Microsoft had exclusive cloud partnership rights with OpenAI. Last week, those exclusivity provisions expired. Days later, OpenAI signed with Amazon. This is about breaking free from any single vendor's pricing power, especially NVIDIA's. By distributing workloads across clouds with different hardware ecosystems, OpenAI gains access to NVIDIA GPUs, Google's TPUs, AWS's Trainium chips, and future custom silicon from Broadcom Inc (NASDAQ:AVGO). When your infrastructure commitments exceed $1.4 trillion and you're burning $8-10 billion annually, vendor lock-in becomes existential risk.

As Mike Krieger, Anthropic's chief product officer, told CNBC: "There is such demand for our models that I think the only way we would have been able to serve as much as we've been able to serve so far this year is this multi-chip strategy." Translation: The AI labs have figured out that dependence on NVIDIA's pricing power is unsustainable. Custom silicon is already here.
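The commitment figures above can be totaled to check the "$660 billion" headline number. A minimal sketch; the Google figure is described only as "tens of billions," so the $50 billion used here is a placeholder assumption, not a reported number:

```python
# Sanity check of OpenAI's reported infrastructure commitments (in $B).
# Google's commitment is an assumption ("tens of billions" in the article).
commitments_bn = {
    "Microsoft": 250,
    "Oracle": 300,
    "Google": 50,       # assumption, not a disclosed figure
    "AWS": 38,
    "CoreWeave": 22.4,
}

total_bn = sum(commitments_bn.values())
print(f"Total: ~${total_bn:.1f}B")
```

With that placeholder, the sum lands just above $660 billion, matching the article's round figure.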
The Circular Economy Problem That Could Sink Everything

While Amazon announces the OpenAI deal, there's an uncomfortable truth underneath: a significant portion of AI infrastructure "demand" is circular. Amazon invested $8 billion in Anthropic. Anthropic uses AWS infrastructure. AWS revenue grows, justifying Amazon's massive capex. That capex validates AI infrastructure investments, attracting more customers, perpetuating the cycle.

Similarly, OpenAI pays AWS $38 billion for infrastructure. AWS uses that revenue to build more data centers and develop Trainium3. OpenAI's ability to deploy $1.4 trillion in infrastructure commitments justifies its $500 billion valuation, which attracts investor capital, which funds more infrastructure deals.

Wall Street analysts have become concerned about recent circular deals among leading artificial intelligence companies. AI infrastructure providers like Amazon and NVIDIA have invested in their customers, who then turn around and buy more of their products. As Jeremy Grantham's firm GMO warned, this looks eerily similar to Cisco Systems Inc (NASDAQ:CSCO) in the late 1990s, which lent money to startups to buy Cisco routers, then booked those sales as revenue. When the bubble popped, Cisco lost 78% of its value.

The critical question: How much of AWS's 20% growth is organic customer demand versus circular ecosystem revenue from companies Amazon has invested billions in? If five to eight percentage points of that growth come from circular deals, the organic growth rate might actually be 12-15%. Still healthy, but dramatically different from headline numbers.
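The decomposition above is simple subtraction. A minimal sketch using the article's hypothetical scenario (the circular share is illustrative, not reported data):

```python
# Decompose AWS's headline growth into organic and circular components.
# Figures are the article's hypothetical scenario, not reported data.
headline_growth = 0.20  # 20% reported AWS growth rate

for circular_points in (0.05, 0.08):  # 5-8 points attributed to circular deals
    organic = headline_growth - circular_points
    print(f"circular share {circular_points:.0%} -> organic growth ~{organic:.0%}")
```

Subtracting five to eight points from a 20% headline rate yields the 12-15% organic range the article cites.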
The Cascade Scenario Worth Watching

The systemic risk Wall Street analysts aren't modeling:

1. AI productivity gains disappoint or arrive slower than expected.
2. Infrastructure utilization drops from 95% to 60-70%.
3. OpenAI and Anthropic can't sustain $1.4 trillion in combined commitments.
4. AWS, Azure, and Oracle face revenue shortfalls.
5. NVIDIA GPU demand craters (the stock currently trades at 50x earnings).
6. Circular financing breaks; the ecosystem can no longer prop up interdependent valuations.
7. AI infrastructure becomes stranded assets, forcing writedowns across the sector.

As the Brookings Institution warned, if AI productivity gains are "limited or delayed, a sharp correction in tech stocks, with negative knock-ons for the real economy, would be very likely."

Why Amazon Still Wins (Even If The Bubble Deflates)

Despite the circular financing concerns, Amazon is positioned better than almost anyone:

1. Immediate revenue recognition: OpenAI is accessing AWS capacity immediately and paying now, not deferred over seven years.

2. Hardware optionality: By supporting both NVIDIA (OpenAI) and custom silicon (Anthropic), Amazon wins regardless of which architecture dominates.

3. Customer diversification: Unlike Microsoft, which is heavily dependent on OpenAI's success, Amazon has a broader enterprise cloud business plus Anthropic as a hedge.

4. Infrastructure execution: As Krieger noted, "These deals all sound great on paper, but they only materialize when they're actually racked and loaded and usable by the customer. And Amazon is incredible at that." AWS added more than 3.8 gigawatts of power capacity in the past 12 months (more than any other cloud provider) and plans to double total capacity by 2027.

What Investors Should Watch

The $38 billion deal is real, but sustainability depends on whether the circular financing can continue and whether AI delivers the productivity gains that justify a trillion dollars in infrastructure spending. Three critical signals:

1. Anthropic's Trainium2 success metrics: Anthropic recently launched a "latency-optimized mode" for Claude 3.5 Haiku that runs 60% faster on Trainium2. If Anthropic can train and deploy frontier models on custom chips at half of NVIDIA's cost, the entire GPU premium pricing structure collapses.

2. OpenAI's path to profitability: The company is burning $8-10 billion annually with a total projected burn of $115 billion through 2029. If there's no clear path to positive cash flow by 2028, these massive infrastructure commitments become unsustainable.

3. AWS organic growth decomposition: Monitor how much of AWS's 20% growth comes from OpenAI and Anthropic versus traditional enterprise customers. If more than 20% of it comes from the circular ecosystem, the quality of growth is suspect.

The Investment Implications

Amazon's $38 billion OpenAI deal represents the opening move in a war over who controls AI infrastructure economics. On one side: OpenAI with NVIDIA GPUs, representing the status quo where hyperscalers pay premium prices to the chip monopoly. On the other: Anthropic with Trainium2, representing a future where hyperscalers build cost-efficient custom silicon and reclaim pricing power.

For NVIDIA shareholders, the signs are clear. The company will remain dominant and profitable, but it's transforming from "irreplaceable monopoly" to "leading semiconductor company with normalizing margins." When that perception shift completes (likely within 12-18 months), NVIDIA's valuation multiple will compress from 50x earnings to 25-30x.

For Amazon shareholders, this dual strategy (serving both the NVIDIA ecosystem and the custom silicon revolution) positions AWS as the Switzerland of AI infrastructure. That strategic ambiguity works in their favor.

The AI revolution is real. But the winners are being determined not by who builds the best models, but by who controls the most cost-efficient infrastructure to run them. Right now, that battle is just beginning.
And Monday’s $38 billion deal was never really about OpenAI and Amazon at all. It was always about the war to break NVIDIA’s monopoly, one custom chip at a time.
