Key Moments
- Nvidia’s GTC conference is set to spotlight advances across chips, data centers, CUDA software, AI agents, and robotics as it seeks to maintain AI leadership.
- Analysts anticipate Nvidia will outline a full-stack roadmap from Rubin to Feynman while addressing rising competition in both training and inference markets.
- Recent strategic moves, including the $17 billion Groq acquisition and $2 billion investments in Lumentum and Coherent, are expected to feature prominently at GTC.
Investors Focus on Strategy as GTC Opens
In San Francisco, Nvidia CEO Jensen Huang is preparing to address a packed hockey arena as he opens the company’s annual developer conference on Monday, with the spotlight on new products and alliances aimed at preserving Nvidia’s edge in artificial intelligence amid intensifying rivalry.
The four-day Nvidia GTC event, which effectively occupies the center of Silicon Valley for much of the week, has evolved into Huang’s main platform to unveil progress across the company’s AI portfolio. This includes advances in chips, data center technologies, CUDA programming software, AI agents, and physical AI systems such as robots.
This year’s gathering carries added weight for investors looking for evidence that Nvidia’s approach of aggressively reinvesting profits into the broader AI ecosystem is translating into sustained growth and competitive strength.
“I expect Nvidia to present a full-stack roadmap update from Rubin to Feynman while emphasizing inference, agentic AI, networking, and AI factory infrastructure,” said eMarketer analyst Jacob Bourne, referencing Nvidia’s current and upcoming chip generations.
Market Leadership Amid Shifting AI Workloads
Nvidia’s hardware sits at the core of vast data center buildouts valued in the hundreds of billions of dollars by governments and corporations worldwide. However, the company is increasingly challenged by rival chipmakers and large customers that are designing their own specialized processors.
Analysts told Reuters they expect the overall AI chip market to keep expanding, even as Nvidia’s share may narrow as the landscape evolves. A key trend is the rise of AI agents, which shuttle between software applications to execute tasks for human users. That marks a shift beyond the training phase, in which AI labs link large numbers of Nvidia chips to process massive datasets and refine models.
As these agents proliferate, analysts anticipate the emergence of an additional AI control layer, often called an “orchestration” layer, to mediate between human users and their networks of agents. This development is seen as a sign that AI is becoming more practical and embedded in everyday workflows.
At the same time, the type of work known in the industry as “inference” can run on a broader mix of chips, including custom processors developed by major Nvidia customers. Companies such as OpenAI and Meta, which recently indicated plans to roll out new AI chips every six months, are working on their own solutions.
“Nvidia is definitely going to see more competition compared to a year ago,” said Kinngai Chan, a managing director at Summit Insights Group. “Nvidia still has close to 90% market share in both training and inference markets today.”
“We think Nvidia will begin to see share loss starting in 2027, once in-house ASIC programs gain some scale, especially in the inference market,” he added, referring to application-specific integrated circuits, chips designed for particular functions or custom workloads that can be more efficient than general-purpose graphics processing units.
Strategic Acquisitions and Product Plans
Nvidia has moved to reinforce its position through acquisitions and product expansion.
In December, the company agreed to spend $17 billion to acquire Groq, a chip startup focused on delivering fast and low-cost inference computing. During the company’s earnings call last month, Huang said Nvidia would use GTC to demonstrate how Groq’s ultra-fast AI capabilities can be integrated into the existing CUDA platform.
William McGonigle, an analyst at Third Bridge, said his firm expects Nvidia to introduce a new family of servers that pair Groq’s processors with Nvidia’s networking solutions to offer a high-speed, cost-effective system.
CPUs Regain Prominence Alongside GPUs
Another competitive pressure point for Nvidia is the central processing unit, or CPU, traditionally associated with companies such as Intel and Advanced Micro Devices. While GPUs have dominated AI workloads in recent years, McGonigle noted that CPUs are “back in focus” and said he expects Nvidia to highlight servers built solely around its own CPUs, which Huang promoted on a recent earnings call.
“With the rise of agentic AI, the bottleneck is now at the agent orchestration level, which is carried out by the CPUs,” McGonigle said.
Optical Investments to Accelerate Data Center Connectivity
Analysts also anticipate more detail from Nvidia on its decision to invest $2 billion each in Lumentum and Coherent. Both companies manufacture lasers used to transmit data between chips via light beams.
These lasers are key components of co-packaged optics, a technology that could enhance the speed of connections among Nvidia’s chips within large-scale data centers. However, current production volumes are not yet sufficient to match the number of chips Nvidia ships annually.
“Nvidia will likely frame co-packaged optics as key to connecting massive AI clusters more efficiently, but the challenge is making it affordable enough to deploy at scale,” said eMarketer’s Bourne.
Strategic Snapshot
| Focus Area | Detail |
|---|---|
| Conference platform | Nvidia GTC, four-day developer conference centered on AI chips, data centers, CUDA, AI agents, and robots |
| Roadmap emphasis | Full-stack update from Rubin to Feynman, with focus on inference, agentic AI, networking, and AI factory infrastructure |
| Market position | Close to 90% share in training and inference markets today, with potential share loss starting in 2027 |
| Key acquisition | $17 billion purchase of Groq for fast, low-cost inference computing and CUDA integration |
| Optics investments | $2 billion each in Lumentum and Coherent to support co-packaged optics for data center connectivity |