By Karl Freund, Contributor
In an industry first, Nvidia has announced a new GPU, the Rubin CPX, that offloads the compute-intensive "context processing" phase from another GPU. Yep, for some AI workloads you will now need two kinds of GPU to maximize performance and profit. I would be surprised if the competition doesn't follow suit; the benefits are tremendous. (Nvidia, like many other semiconductor firms, is a client of my company, Cambrian-AI Research.)
Rubin CPX is designed to handle very long inputs to LLMs, over 1 million tokens. Not many applications need such a long context to be encoded for AI processing. But those that do desperately need a better hardware platform for the job; encoding is an extremely compute-intensive process. Modern GPUs are designed for the memory- and network-bound generation (decode) phase of LLMs, with expensive HBM memory that the compute-bound context (prefill) phase doesn't really need. Nvidia has been explaining the different needs of these two phases for the last couple of years, and highlighted the benefits of disaggregating inference across different GPUs in its MLPerf announcements, so many of us began wondering when someone would build a solution tailored to the prefill job. Nvidia did just that with CPX, but you may have to wait a year to get it.
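To make the disaggregation idea concrete, here is a minimal Python sketch of the pattern. All class and function names here are invented for illustration; this is not Nvidia's software (frameworks such as Nvidia Dynamo handle the real scheduling and KV-cache movement). The idea is simply to run the compute-bound prefill on one pool of GPUs, then hand the resulting KV cache to memory-rich GPUs for decode.

```python
# Illustrative sketch of disaggregated (prefill/decode) inference.
# All names are hypothetical, not an actual Nvidia or framework API.

from dataclasses import dataclass

@dataclass
class KVCache:
    """Key/value state produced by prefill, consumed by decode."""
    tokens: int
    data: object  # placeholder for the actual tensors

class PrefillWorker:
    """Runs on compute-optimized GPUs (the Rubin CPX role)."""
    def prefill(self, prompt_tokens: list[int]) -> KVCache:
        # Attention over the full context: compute-bound, little HBM needed.
        return KVCache(tokens=len(prompt_tokens), data=None)

class DecodeWorker:
    """Runs on HBM-rich GPUs (the standard Rubin role)."""
    def decode(self, kv: KVCache, max_new_tokens: int) -> list[int]:
        # Token-by-token generation: memory-bandwidth-bound.
        return [0] * max_new_tokens  # placeholder output

def serve(prompt_tokens: list[int]) -> list[int]:
    kv = PrefillWorker().prefill(prompt_tokens)      # long-context encode
    return DecodeWorker().decode(kv, max_new_tokens=256)
```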
Nvidia has identified coding large programs and video processing as applications needing over a million tokens as input.
The Rubin-CPX Processor for Long Context AI
Nvidia estimates that some 20% of AI applications are left waiting for the first token (Time To First Token, or TTFT) while the GPUs crunch through the encoding (prefill) work. That can take perhaps five to ten minutes for 100,000 lines of code. For multi-frame, multi-second videos, pre-processing and per-frame embedding increase latency rapidly; 10–20 seconds or longer is common, varying with video length and LLM capabilities. That is why video LLMs today are typically used only to create short clips.
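As a rough back-of-envelope check (my own illustrative numbers, not Nvidia's), prefill costs roughly two FLOPs per model parameter per input token, so a million-token prompt on a single GPU lands squarely in that multi-minute range:

```python
# Back-of-envelope time-to-first-token (prefill) estimate.
# Illustrative assumptions only: a 400B-parameter dense model, a 1M-token
# prompt, and one GPU sustaining 2 PFLOPS of usable dense compute.
# Ignores the quadratic attention term, which makes long contexts even costlier.

params = 400e9          # model parameters (assumed)
context_tokens = 1e6    # ~100,000 lines of code, tokenized (assumed)
gpu_flops = 2e15        # sustained FLOPS on one GPU (assumed)

prefill_flops = 2 * params * context_tokens   # ~8e17 FLOPs
ttft_seconds = prefill_flops / gpu_flops      # ~400 s
print(f"Estimated time to first token: {ttft_seconds / 60:.0f} minutes")  # ~7 min
```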
Nvidia estimates that a $3M investment in GB200 NVL72 can generate $30M in token revenue
And as the chart above contends, an AI factory's profit increases with performance. Even if the competition were to give away their GPUs for free, today's GB200 NVL72 can increase token profit nearly fourfold over that free competition. One should assume an even better ROI with Blackwell Ultra and with Rubin next year. Of course, it will be better still when you add the new CPX to a rack of Rubin GPUs.
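For a sense of the arithmetic behind that chart (again, illustrative assumptions of mine, not Nvidia's figures), annual token revenue is simply sustained throughput times the price per million tokens:

```python
# Illustrative AI-factory revenue arithmetic; every input is an assumption.

tokens_per_second = 1.0e6     # sustained output tokens/s for one rack (assumed)
price_per_m_tokens = 2.00     # $ per million output tokens (assumed)
utilization = 0.5             # fraction of the year the rack is busy (assumed)

seconds_per_year = 365 * 24 * 3600
annual_revenue = (tokens_per_second * seconds_per_year * utilization / 1e6
                  * price_per_m_tokens)
print(f"Annual token revenue: ${annual_revenue / 1e6:.1f}M")  # ~$31.5M
```

With these made-up inputs, a rack in the $3M range returns on the order of $30M a year in tokens, which is why throughput per dollar, not chip price, drives the economics.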
Inference is two workloads and now Nvidia has a GPU specifically designed for the Context phase.
If you use the Blackwell GPUs in today's rack more intelligently, dividing the context and generation phases across different GPUs, you can triple performance with the same cost and energy profile. Now, if you add a GPU optimized for long-context processing, lowering cost by using less expensive memory and accelerating attention by another 3X, total inference performance can increase by roughly another factor of three.
Nvidia plans to make the Rubin CPX available in two forms. For new installations requiring long-context AI, the Vera Rubin NVL144 CPX adds the CPX chips onto the compute tray housing the Vera CPU and the Rubin GPU, tripling performance of next year’s Vera Rubin.
VR NVL144 CPX jumps the compute from 3.6 exaflops to 8 exaflops
But hey! You just paid $3M for a shiny new NVL144! No worries. Nvidia will sell you a separate rack with the right number of CPX nodes to attach to your Rubin rack. This will increase the performance of the Vera Rubin rack from 5 exaflops to 8, and will support up to 150TB of fast GDDR7 memory.
Customers can add a CPX rack to an NVL144 rack
Nvidia presented the slide below showing that Rubin CPX delivers up to 6.5X the performance of the GB300 when handling large context windows.
Rubin CPX is up to 6.5 times faster than Blackwell Ultra for large context length applications
Here’s Nvidia’s updated roadmap through Feynman in 2028. While Nvidia did not announce that Rubin CPX will be followed by a Rubin Ultra CPX, it can probably be assumed. Nvidia announces products more than a year out these days, because data center operators need to plan for future upgrades and expansions. For example, planners can now leave room for a CPX rack next to the Rubin racks installed before CPX becomes available.
The updated Nvidia roadmap
What’s next?
This announcement represents a major milestone in the software and hardware needed to process inference queries efficiently: for context windows greater than one million tokens, inference is disaggregated into two workloads, with a GPU tailored to each. Others, like Google and AMD, will certainly evaluate the approach and decide whether their customers would benefit.
Disclosures: This article expresses the opinions of the author and is not to be taken as advice to purchase from or invest in the companies mentioned. My firm, Cambrian-AI Research, is fortunate to have many semiconductor firms as our clients, including Baya Systems, BrainChip, Cadence, Cerebras Systems, D-Matrix, Esperanto, Flex, Groq, IBM, Intel, Micron, NVIDIA, Qualcomm, Graphcore, SiMa.ai, Synopsys, Tenstorrent, Ventana Micro Systems, and scores of investors. I have no investment positions in any of the companies mentioned in this article. For more information, please visit our website at https://cambrian-AI.com.