All Roads Lead To NVIDIA: Bankrolling Its Own AI Gold Rush

By Roomy Khan, Contributor | 2025-11-03

Copyright Forbes


Nvidia headquarters (Getty Images)

NVIDIA just wrote a $5 billion check for a stake in Intel. It's the AI gold rush in microcosm: the biggest winner is now bankrolling its own boom.

The Intel deal is one small piece of an extraordinary surge. In 2025, through October, nearly one trillion dollars in AI infrastructure commitments has surfaced: $500 billion from Stargate alone; approximately $150 billion in NVIDIA-driven strategic and supply commitments; and a cascade of hyperscaler, private-equity, and sovereign GPU procurement moving at unprecedented velocity. This isn't a tech cycle. It's a capital reordering. And it reveals a fundamental power shift: the company at the center of the AI buildout is now funding it.

Every boom has two sides: those writing checks and those cashing them. The spenders? Microsoft, Meta, Google, and Amazon: over $750 billion in datacenter spending between 2023 and 2025, nearly $400 billion in 2025 alone. In earnings calls on October 29–30, 2025, all four signaled plans to increase spending materially in 2026. And the investor base is multiplying. Private equity firms are raising billion-dollar AI infrastructure funds. Investment banks are structuring datacenter acquisitions. Sovereign wealth funds are locking in GPU capacity years in advance.

The recipients span the entire infrastructure stack: GPUs, memory, networking, cooling, and power. Plenty of companies are winning on volume. But one company captured the lion's share: NVIDIA, with 80–95% of the AI accelerator market and 70–80% gross margins. Between 2023 and 2025, its revenue surged from $27 billion to $130 billion while its market value grew nearly tenfold. NVIDIA didn't just participate in the AI boom. It won. Now, with a war chest built from those sales, NVIDIA is deploying capital strategically to control both supply and demand in the infrastructure buildout.
Unlike traditional VCs, who simply take equity stakes, NVIDIA takes equity and turns portfolio companies into GPU customers locked into its ecosystem. The pattern repeats across the portfolio. In July 2023, a $50 million investment in Recursion Pharmaceuticals enabled the biotech firm to build BioHive-2, an NVIDIA DGX SuperPOD supercomputer with 504 H100 GPUs, completed in May 2024. By mid-2025, Perplexity AI, backed by NVIDIA since April 2024, was likely deploying Blackwell GPUs for its AI search engine, consistent with NVIDIA's strategy of providing early access to its latest hardware.

The loop is elegant: NVIDIA invests capital and takes equity → portfolio companies buy GPUs → additional GPU revenue → portfolio equity appreciates → larger war chest → repeat. Each investment converts to infrastructure deployment. Each partnership locks in ecosystem adoption, creating a reinforcing cycle in which NVIDIA effectively funds its own customer base.

The Spending Explosion

On March 10, 2025, Oracle CEO Safra Catz announced a figure on an earnings call that stunned analysts: $48 billion in new cloud contracts signed in a single quarter, the largest booking quarter in the company's history. The backlog hit $130 billion, up 63% year over year. Then came the kicker: those numbers didn't even include Stargate. The $500 billion OpenAI-Oracle-SoftBank megaproject announced two months earlier wasn't part of the reported backlog. By late September, Stargate had committed more than $400 billion and secured seven gigawatts of capacity.

An even larger wave followed. Between mid-September and late October 2025, the AI infrastructure market detonated with hundreds of billions more in commitments. Anchoring the blitz: NVIDIA and OpenAI's letter of intent, worth up to $100 billion, for 10 gigawatts of compute, with the first gigawatt online in mid-2026. Flanking it: CoreWeave signing $36.6 billion in five days ($22.4 billion with OpenAI, $14.2 billion with Meta).
AMD locking in 6 gigawatts with OpenAI. Broadcom securing another 10 gigawatts of custom ASICs starting in late 2026.

NVIDIA is directing capital to reinforce its AI ecosystem moat: $5 billion into Intel, $2 billion into xAI's Colossus 2 supercomputer, $1 billion for a Nokia stake. Then came the deal that revealed the whole game: a BlackRock-led consortium, including NVIDIA, Microsoft, and xAI, spending $40 billion to acquire Aligned Data Centers. It wasn't a traditional tech acquisition; it was a buy-the-grid play. They weren't buying a business; they were buying power, land, and priority access to the electrical future. This isn't a spending cycle. It's an arms race for computational sovereignty, and the infrastructure is sold out before it's even built.

The Deal That Explains Everything: CoreWeave

The numbers are staggering, yet the mechanics are variations on a single, ruthlessly efficient playbook.

Flywheel one, equity appreciation: in April 2023, NVIDIA invested $100 million in CoreWeave at a $2 billion valuation, then added another $250 million at the March 2025 IPO at $40 per share. Total investment: $350 million. CoreWeave now trades around $135 per share, putting NVIDIA's stake at roughly $3.3 billion, nearly 10× its total outlay. As CoreWeave lands customer contracts, that equity value climbs.

Flywheel two, GPU revenue: to fulfill those contracts, CoreWeave buys hundreds of thousands of NVIDIA GPUs. Each purchase generates chip revenue for NVIDIA. The rising revenue and equity value help fund more investments, accelerating the loop. The structure is nearly bulletproof. Heads, NVIDIA wins. Tails, NVIDIA still wins.

What's unusual about the CoreWeave relationship: NVIDIA has committed to buy back $6.3 billion in capacity through 2032 if CoreWeave can't fill its data centers. Most cloud providers assume full inventory risk on multibillion-dollar GPU purchases. NVIDIA backstopped its own customer.

Not everyone buys it.
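Whatever the skeptics say about durability, the flywheel-one arithmetic itself is easy to verify. A quick back-of-the-envelope check, using only the figures cited above (the $3.3 billion stake value is the reported estimate; CoreWeave's exact share count held by NVIDIA is not disclosed):

```python
# Back-of-the-envelope check on NVIDIA's CoreWeave position,
# using only the figures cited in the article.
invested = 100e6 + 250e6   # April 2023 round + March 2025 IPO allocation
stake_value = 3.3e9        # reported stake value with shares around $135

multiple = stake_value / invested
print(f"Return multiple: {multiple:.1f}x")  # ~9.4x, i.e. "nearly 10x"
```

The point of the exercise is the scale: a mid-nine-figure outlay became a multibillion-dollar position in about two years, while the same capital seeded a large GPU buyer.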
"There's a lot more that can go wrong with Nvidia than can go right," says Jay Goldberg, an analyst with Seaport Global Securities. NVIDIA did not respond to requests for comment.

AMD's High-Stakes Bet

AMD's revenue grew from $22.7 billion in 2023 to $25.8 billion in 2024, and is on track for roughly $32–34 billion in 2025: a 45% two-year climb, respectable but modest against an AI boom that quintupled NVIDIA's revenue. AMD had been scrambling to break into the AI GPU market. In October, it finally got its break: a 6-gigawatt deal with OpenAI, its biggest contract ever.

The price of landing a transformational AI customer? AMD issued OpenAI warrants for up to 160 million shares at a penny each, potentially giving OpenAI 10% of AMD's equity just to become a supplier. The warrants vest in tranches as OpenAI deploys gigawatts and as AMD's stock hits targets escalating to $600 per share. UBS analyst Timothy Arcuri calculated that the stake could reach $100 billion if OpenAI holds through completion, though he expects OpenAI will likely sell shares along the way to finance GPU purchases. AMD is essentially financing OpenAI's purchases with its own equity.

But AMD isn't just buying in. It's building a genuine alternative. The Helios rack-scale platform matches NVIDIA's Vera Rubin spec for spec: 72 GPUs, integrated networking, turnkey deployment, plus 50% more memory at 31 TB of HBM4. Both systems target mass production in late 2026. AMD projects a $500 billion datacenter AI accelerator TAM by 2028, a conservative estimate given $400 billion in big tech AI capex this year alone.

AMD has competitive technology and a massive market opportunity. What it lacks is time and ecosystem lock-in. NVIDIA buys into customers; AMD dilutes to get them. That asymmetry shows NVIDIA's chokehold on the market.

The ASIC Alternative: Broadcom's Real Threat

Broadcom is playing a different game, and winning.
Revenue jumped from $35.8 billion in 2023 to $51.6 billion in 2024, and is on track for roughly $60 billion in 2025: a 67% two-year surge as hyperscalers embraced custom ASICs for high-volume inference. On October 13, the company secured a deal for 10 gigawatts of custom OpenAI-designed accelerators starting in late 2026. It already has design wins generating billions in revenue across Google, Meta, and other hyperscalers.

These are application-specific integrated circuits (ASICs): custom chips built for one job, high-volume inference. They're dramatically cheaper and more power-efficient than GPUs for repetitive tasks like answering ChatGPT queries. This is real business taking real market share.

The tradeoff? ASICs are appliances, not platforms. A Broadcom chip designed for OpenAI can't run Meta's models, and it can't adapt when workloads evolve. But that may matter less than NVIDIA hopes: for mature, stable inference at massive scale, ASICs win on economics. Broadcom isn't a curiosity. It's a genuine competitive threat to NVIDIA's inference dominance.

And none of it matters if the grid does not come through; the bottleneck is electricity, not silicon.

Power: The Infrastructure Crisis

AI datacenters currently consume approximately 6–8 gigawatts globally. A single gigawatt of electricity, roughly the output of one nuclear reactor running continuously, can power a large AI datacenter. The September OpenAI-NVIDIA deal alone requires 10 gigawatts.

The math is straightforward. At a modern power usage effectiveness (PUE) of ~1.3, a 10-gigawatt facility delivers roughly 7.5 GW of usable compute power. With ~80% of that dedicated to GPUs, the standard for next-generation AI datacenters, nearly 6 GW goes to accelerators. At ~1 kilowatt per Blackwell-class GPU, that translates to ~6 million chips across ~83,000 racks.
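That chain of estimates can be reproduced directly. The sketch below uses the article's round numbers (PUE of 1.3, 80% of IT load to accelerators, ~1 kW per GPU, 72 GPUs per NVL72-style rack), so its totals land near, but not exactly on, the article's rounded figures:

```python
# From 10 GW of grid power to an approximate Blackwell-class GPU count,
# using the round-number assumptions stated in the article.
total_power_gw = 10.0
pue = 1.3              # power usage effectiveness (total power / IT power)
gpu_share = 0.80       # fraction of IT load going to accelerators
watts_per_gpu = 1000.0 # ~1 kW per Blackwell-class GPU
gpus_per_rack = 72     # NVL72-style rack

it_power_gw = total_power_gw / pue      # ~7.7 GW (article rounds to ~7.5)
gpu_power_gw = it_power_gw * gpu_share  # ~6.2 GW to accelerators
gpu_count = gpu_power_gw * 1e9 / watts_per_gpu
racks = gpu_count / gpus_per_rack

print(f"GPUs: ~{gpu_count/1e6:.1f} million across ~{racks:,.0f} racks")
```

With these inputs the count comes out slightly above 6 million GPUs and roughly 85,000 racks; the article's ~83,000 reflects its rounding of the intermediate power figure, not a different method.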
OpenAI builds the datacenter buildings, cooling systems, and networking infrastructure, but it does not generate the power. The grid must deliver the electricity, equivalent to ~10 nuclear reactors running 24/7.

For grid operators, the surge is unprecedented. "I have not seen this degree of change in a forecast in my career, and no one has seen these nationwide growth levels since the 1980s," says Rob Gramlich, president of Grid Strategies. The U.S. grid adds roughly 10–15 gigawatts of new capacity per year. Meeting a ~44–51 GW requirement by 2026 means building three to five times faster, just for AI datacenters. High-voltage transmission lines take 3–5 years to build, even before community battles and permitting delays. California's grid operator says datacenter upgrades won't finish until late 2027. In Northern Virginia, two-thirds of Loudoun County substations face delays through 2026. Small modular reactors won't arrive until the 2030s. Natural-gas plants are faster but face their own infrastructure constraints. The deals promise power years before the grid can supply it.

Should Anyone Worry?

Even if the power arrives on schedule, demand itself could evaporate. A structural shift toward cheaper, distributed, and more efficient AI could erode the very economics this boom depends on. If inference improves on lower-cost infrastructure, and if specialized architectures, from domain-specific ASICs to emerging analog accelerators, deliver order-of-magnitude efficiency gains for targeted workloads, the rationale for premium datacenter GPU infrastructure weakens. The $180 billion in deals assumes AI remains proprietary, centralized, and compute-intensive. History rarely cooperates. Technology cycles bend toward efficiency, commoditization, and distribution.

Regulators compound the uncertainty.
NVIDIA's model binds capital, supply, and demand into a single reinforcing loop, a structure increasingly under scrutiny in cloud-AI partnerships. In January 2024, the FTC opened a formal Section 6(b) inquiry into AI investment and cloud-compute arrangements, compelling major firms to disclose terms around preferential access, exclusivity, and required cloud spend. A follow-on FTC report in early 2025 highlighted concerns that equity-linked compute access and tied-spend agreements can reinforce incumbency and limit competition. If regulators conclude these arrangements distort market access, NVIDIA's flywheel could meet friction.

Jensen Huang has built the most elegant flywheel in modern capitalism. But flywheels spin both ways. The loops that accelerated gains can amplify pain when the cycle turns.

For investors and strategists, the implications are clear: NVIDIA has built structural advantages that extend far beyond chip performance. Its capital deployment strategy creates switching costs and ecosystem lock-in that competitors can't easily replicate. But the same feedback loops that drive exponential growth in boom times can accelerate downturns. The questions aren't whether NVIDIA dominates today; it does. They are whether power infrastructure can support the buildout, whether demand justifies the scale, and whether regulators will allow the flywheel to keep spinning unchecked. The answers will determine whether this is the decade's most prescient capital allocation or its most spectacular miscalculation.
