Artificial intelligence startup Modular Inc. said today it has raised $250 million in its third financing round, valuing the company at $1.6 billion.
The round was led by Thomas Tull’s US Innovative Technology Fund, with DFJ Growth also participating. All existing investors joined, including GV (formerly Google Ventures), General Catalyst and Greylock. The funding brings the total raised by the company to $380 million.
Founded in 2022, Modular provides a platform that allows developers to run AI applications across different computer chips — including central processing units, graphics processing units, application-specific integrated circuits and custom silicon — without the need to rewrite or migrate code.
Over the past three years, the company has built a software infrastructure layer and a specialized programming language, Mojo, designed to let enterprises deploy AI models across a mix of chips and servers.
Modular’s long-term goal is to provide a unified AI deployment layer for enterprise users, addressing a fragmented ecosystem that today requires specialized code for each chip architecture. The company’s platform is an enterprise-grade AI inference stack that abstracts away the underlying hardware.
“When we founded Modular, we believed that the world needed a unified platform for AI, and today, that vision is more important than ever,” Chief Executive Chris Lattner said in a statement.
Nvidia Corp. currently dominates the AI accelerator market. Its Hopper and newer Blackwell architectures are estimated to power between 70% and 95% of AI data-center GPU deployments. That dominance is reinforced by CUDA, Nvidia’s proprietary programming framework, which has become the de facto standard for AI development thanks to its powerful parallel computing capabilities.
Challengers exist, most notably Advanced Micro Devices Inc., which produces Instinct AI accelerators and maintains the open-source ROCm software stack. But because so many developer tools and inference platforms are written for CUDA, migrating to ROCm is often difficult, leaving AMD at a disadvantage.
That’s where Modular sees its opportunity: loosening this vendor lock-in by giving enterprises more freedom to choose their hardware. Its platform already supports architectures from Nvidia, AMD and Apple’s custom silicon. The company says its latest release delivers performance gains of 20% to 50% over leading frameworks such as vLLM and SGLang on next-generation accelerators, including Nvidia’s B200 and AMD’s MI355.
This vision appears to be resonating: AMD, Nvidia and Amazon.com Inc. have all joined as ecosystem partners. Modular has also teamed up with AI application developers, including Inworld AI, to accelerate speech synthesis, and with San Francisco Compute Co., which operates a GPU cluster marketplace.
The company has grown to more than 130 employees, with headquarters in the San Francisco Bay Area. With the new funding, Modular plans to expand hiring across North America and Europe, scale up its cloud platform, extend support for cloud and edge hardware, and broaden its focus from inference into AI training.
Image: SiliconANGLE/Microsoft Designer