Key Moments
- Nvidia CEO Jensen Huang said the company’s next-generation chips are now in full production and already being tested by AI firms.
- The Vera Rubin platform, built from six Nvidia chips, is set to launch later this year and can scale into pods with more than 1,000 chips.
- Nvidia is seeking government licenses to ship its H200 chips to China while promoting Rubin-based systems as a major performance upgrade.
Rubin Chips Move Into Full Production
Speaking at the Consumer Electronics Show in Las Vegas on Monday, Nvidia CEO Jensen Huang said the company’s next-generation chips have entered full production. According to Huang, the new products deliver up to five times the AI computing power of Nvidia’s earlier chips for chatbots and other applications.
Huang said the chips are already in Nvidia’s labs, where AI companies have begun testing them. The rollout comes as Nvidia faces growing competition, both from traditional rivals and from some of its largest customers.
“Our next generation GPU architecture, Vera Rubin, is now in full production,” Huang said during the keynote.
Vera Rubin Platform Architecture and Performance
Huang also shared new details about the Vera Rubin platform, which is built from six separate Nvidia chips. The platform is expected to debut later this year.
Specifically, the flagship Rubin server will include 72 graphics processing units and 36 new central processors. Together, they form the backbone of Nvidia’s next AI infrastructure push.
In addition, Huang explained how Rubin systems can be linked into large “pods” containing more than 1,000 chips. These configurations can boost the efficiency of generating AI “tokens” by as much as tenfold.
To achieve these gains, Nvidia relies on a proprietary data format. Huang said the company hopes the technology will see broad adoption across the industry.
“This is how we were able to deliver such a massive performance leap, even with only 1.6 times the number of transistors,” Huang said.
| Feature | Detail |
|---|---|
| Platform name | Vera Rubin |
| Chips per platform | Six Nvidia chips |
| Flagship server GPUs | 72 |
| Flagship server CPUs | 36 |
| Pod scale | More than 1,000 Rubin chips |
| Token-efficiency gain | Up to 10× |
| Transistor increase vs. prior chips | 1.6× |
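For a back-of-envelope feel for those figures, the sketch below works through two of them. Note the assumptions: the per-transistor number simply divides the claimed multiples, and the server count assumes the “more than 1,000 chips” pod figure counts only GPUs and CPUs, which Nvidia has not confirmed.

```python
# Back-of-envelope arithmetic on the keynote figures (illustrative only;
# none of this is Nvidia's own math, and assumptions are flagged inline).

claimed_compute_gain = 5.0   # up to 5x AI compute vs. prior chips (claimed)
transistor_gain = 1.6        # 1.6x the transistor count, per Huang
per_transistor = claimed_compute_gain / transistor_gain
print(f"Implied gain per transistor: ~{per_transistor:.1f}x")  # ~3.1x

# Assumption: the ">1,000 chips" pod figure counts only GPUs and CPUs.
chips_per_server = 72 + 36   # flagship Rubin server: 72 GPUs + 36 CPUs
servers_in_pod = -(-1000 // chips_per_server)  # ceiling division
print(f"Flagship servers needed to exceed 1,000 chips: ~{servers_in_pod}")  # ~10
```

A roughly threefold gain per transistor is consistent with Huang crediting the data format, rather than raw transistor count, for the performance leap.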
Focus on AI Inference and Context Memory
Nvidia continues to dominate AI model training. However, Huang acknowledged rising pressure in inference, the work of delivering AI results directly to users.
Competition is increasing from firms such as Advanced Micro Devices, as well as from customers like Alphabet’s Google. As a result, Nvidia is devoting more attention to those workloads.
To support this effort, the new chips are tuned heavily for chatbots and similar services. Nvidia is also adding a new storage layer called “context memory storage.”
This feature aims to deliver faster responses to long questions and extended conversations. As demand for real-time AI grows, Nvidia sees this capability as increasingly important.
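Nvidia has not published details of how context memory storage works. Purely as an illustration of the general idea, the toy sketch below caches per-conversation state so each new turn avoids reprocessing the full history; every name in it is hypothetical.

```python
# Toy sketch of the idea behind a context-memory tier: cache per-conversation
# state so long chats are not reprocessed from scratch on every turn.
# Hypothetical structure; this is not Nvidia's API or implementation.

class ContextMemoryStore:
    def __init__(self):
        self._store = {}  # conversation_id -> cached context (prior turns)

    def load(self, conversation_id):
        return self._store.get(conversation_id, [])

    def save(self, conversation_id, context):
        self._store[conversation_id] = context

def answer_turn(store, conversation_id, message):
    context = store.load(conversation_id)  # cache hit: old turns come back free
    context = context + [message]          # only the new turn needs processing
    store.save(conversation_id, context)
    return f"(reply generated from {len(context)} cached turns)"

store = ContextMemoryStore()
print(answer_turn(store, "chat-1", "What chips did Nvidia announce?"))
print(answer_turn(store, "chat-1", "How do they compare to Blackwell?"))
```

The design point is simply that state which would otherwise be recomputed on every turn is persisted to a dedicated storage tier, which is why the feature targets long questions and extended conversations.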
Networking and Co-Packaged Optics
In parallel, Nvidia unveiled a new generation of networking switches that use co-packaged optics. According to the company, this technology helps connect thousands of machines into a single system.
These switches will compete directly with offerings from Broadcom and Cisco Systems. Nvidia views networking as a critical component of large-scale AI deployments.
Adoption by Cloud and AI Infrastructure Providers
Nvidia said CoreWeave will be among the first companies to adopt the Vera Rubin systems. Additionally, the company expects major cloud providers to follow.
Those customers include Microsoft, Oracle, Amazon, and Alphabet. Their adoption could accelerate Rubin’s deployment across global AI infrastructure.
Advances in Autonomous Driving Software
Beyond data centers, Huang announced new software aimed at supporting self-driving vehicles. The software helps autonomous systems choose routes and generate auditable decision records.
Nvidia previously showcased this software, known as Alpamayo, late last year. On Monday, Huang said the company will now release it more broadly.
Notably, Nvidia will also release the training data behind Alpamayo. This approach allows automakers to better evaluate how the system works.
“By open-sourcing both the models and the data, you can truly understand how the system was built,” Huang said during his keynote.
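Nvidia has not published Alpamayo’s record format. Purely as an illustration of what an auditable decision record can look like, here is a minimal hypothetical sketch; every field name is an assumption.

```python
# Minimal, hypothetical sketch of an auditable driving-decision record.
# Alpamayo's actual format is not public; all fields here are assumptions.
import json
import time

def record_decision(situation, options, chosen, rationale):
    """Capture what the system saw, what it considered, and why it chose."""
    return {
        "timestamp": time.time(),
        "situation": situation,
        "options_considered": options,
        "decision": chosen,
        "rationale": rationale,
    }

record = record_decision(
    situation="pedestrian near crosswalk, ~30 m ahead",
    options=["maintain speed", "slow to 20 km/h", "stop"],
    chosen="slow to 20 km/h",
    rationale="pedestrian not in roadway; reduced speed keeps stopping margin",
)
print(json.dumps(record, indent=2))  # a reviewer can later replay the choice
```

The point of such records is that an automaker or regulator can reconstruct, after the fact, why the vehicle acted as it did.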
Groq Talent and Technology Acquisition
Huang also discussed Nvidia’s recent acquisition of talent and chip technology from startup Groq. The move brings in executives who helped Google design its in-house AI chips.
Although Google remains a major Nvidia customer, its internal chips pose a growing competitive challenge. Google has also partnered with Meta and others, helping those companies reduce their reliance on Nvidia hardware.
During a post-speech Q&A, Huang said the Groq deal “won’t affect our core business.” However, he added that it could enable new products over time.
Positioning Against H200 and Blackwell in China
Huang also highlighted how Nvidia’s newest products compare with older chips such as the H200. He noted that U.S. authorities have allowed the H200 to be shipped to China.
According to Reuters, Huang told analysts that Chinese demand for the H200 remains robust, a trend that has raised concerns among U.S. lawmakers. Nvidia CFO Colette Kress added that the company has applied for export licenses and is awaiting government approvals before shipping more units.