Key points
- Nvidia used CES on January 5, 2026 to reinforce that its next major data-centre platform, Vera Rubin, is “in full production,” with systems expected to roll out via partners in the second half of 2026.
- The investor-relevant shift is that AI is moving from building models to running AI for real users at scale, which tends to broaden the opportunity set beyond GPUs into networking, memory/storage, and data-centre infrastructure.
- Nvidia also pushed “physical AI” (robotics and autonomous systems) as a longer-duration growth option, but timelines and adoption uncertainty remain high.
What happened?
CES, the Consumer Electronics Show in Las Vegas, is a major annual showcase for technology products and platforms. For markets, it is a useful signal on where innovation spending and commercial adoption may be heading.
At CES on January 5, 2026, Nvidia CEO Jensen Huang used the keynote to push three headline messages:
- Rubin is coming, and Nvidia says it’s already “in full production”: Nvidia framed Vera Rubin as the next major data-centre platform and said systems built with it should arrive through partners in the second half of 2026. Huang claimed roughly 5× the performance of prior platforms and large reductions in cost per inference token.
- Microsoft and CoreWeave were cited as early adopters for Rubin-based data centres.
- It’s not just a chip, it’s a full “platform” built to run AI at massive scale: Coverage highlighted that Nvidia presented Rubin as an integrated stack (CPU/GPU plus networking and data-centre components).
- Nvidia doubled down on “physical AI”: Nvidia released open models and tooling aimed at accelerating real-world applications in robotics and autonomous systems.
- Nvidia announced an expanded partnership with Siemens, with Nvidia’s stack integrating into Siemens’ industrial software to support “physical AI” from design and simulation through to production.
- Nvidia also announced that the first autonomous passenger car featuring Alpamayo, built on NVIDIA DRIVE, will be the all-new Mercedes-Benz CLA, with “AI-defined driving” coming to the US, Europe and Asia this year.
- Other robotics partners cited included Bosch, Fortinet, Salesforce, Hitachi and Uber, all using Nvidia’s open model technologies.
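The “cost per inference token” framing above can be sanity-checked with simple unit economics. The sketch below is purely illustrative: every input (hardware cost, depreciation horizon, power draw, electricity price, throughput) is a hypothetical assumption, not an Nvidia or market figure. It shows how a large throughput gain translates into a lower cost per million tokens served, even if the newer platform costs more and draws more power.

```python
# Illustrative inference unit economics.
# All numbers below are hypothetical assumptions, not Nvidia or market figures.

def cost_per_million_tokens(
    hw_cost_usd: float,        # assumed hardware cost, amortised over its life
    life_years: float,         # assumed depreciation horizon
    power_kw: float,           # assumed average power draw
    power_usd_per_kwh: float,  # assumed electricity price
    tokens_per_sec: float,     # assumed sustained inference throughput
) -> float:
    """Amortised (hardware + energy) cost per one million tokens served."""
    seconds = life_years * 365 * 24 * 3600
    total_tokens = tokens_per_sec * seconds
    energy_cost = power_kw * (seconds / 3600) * power_usd_per_kwh
    total_cost = hw_cost_usd + energy_cost
    return total_cost / total_tokens * 1_000_000

# Hypothetical "prior platform" vs a platform with ~5x throughput
# at somewhat higher hardware cost and power draw.
old = cost_per_million_tokens(30_000, 4, 1.0, 0.08, 2_000)
new = cost_per_million_tokens(45_000, 4, 1.5, 0.08, 10_000)
print(f"old: ${old:.3f}/M tokens, new: ${new:.3f}/M tokens")
```

Under these made-up inputs, the 5× throughput gain dominates the higher upfront and energy costs, which is why serving-cost claims tend to hinge on sustained throughput rather than sticker price.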
Why this matters for investors
In our opinion, Nvidia’s messages at CES 2026 carried several key signals for the company and for the AI theme at large.
1) CES was less about “new technologies” and more about protecting the AI spending cycle
We think Huang’s real target was investor confidence in the AI spending cycle. He was signalling that the next upgrade wave is already mapped out and that Nvidia is trying to pull more of the value chain into an integrated platform story, not just a GPU story.
This matters because the market’s next debate is likely to focus less on whether AI is exciting and more on whether AI can be run reliably and cheaply enough for mass usage to translate into sustainable earnings across the tech ecosystem.
2) The opportunity set can broaden from “chips” to the “AI infrastructure enablers”
If AI usage continues to expand, the constraints often show up in areas such as data movement, memory access, and data-centre efficiency. That is why Nvidia’s CES emphasis on a full platform has broader read-through for parts of tech that support large-scale AI usage, including networking and connectivity, memory and storage, and data-centre infrastructure.
3) “Physical AI” is a long-duration call option—useful, but don’t price it like next quarter’s revenue
Nvidia’s robotics and autonomy push is strategically important (open models + tooling + partnerships), but markets have seen “next big things” overpromise before. The investable takeaway is optionality, not certainty.
Market playbook: How investors can express the theme (for information purposes only)
Below are ways to think about positioning, not recommendations.
Theme A: “AI capex continues”
Multi-year build-outs can support revenue visibility across several layers of the ecosystem. Investors may watch segments that historically benefit from sustained data-centre build-outs, including core compute, foundry/packaging supply chains, and large-scale data-centre platform providers.
The key risk is that expectations can run ahead of reality, and even strong growth can disappoint if it is less strong than priced.
Theme B: “AI shifts from training to serving” (the inference era)
CES messaging leaned on making AI cheaper, faster, and more reliable to run for users. This phase can broaden leadership beyond the headline chip names. Segments that can matter as usage scales include networking and connectivity, memory and storage, and data-centre efficiency.
The key risk is that competition intensifies here as hyperscalers and rivals push for custom silicon, in-house systems and alternative designs. The margin debate often gets louder in the inference phase.
Theme C: “Physical AI” (robots + autonomous systems)
Robotics and autonomy form a longer-duration theme that could lift parts of semis, industrial automation, sensors, and edge computing over time. The potential benefit is meaningful optionality if adoption accelerates.
The key risk is that adoption timelines can be long, and technology milestones do not always convert into scaled commercial demand.
Risks investors should consider
- Timing risk: partner systems are not expected until the second half of 2026, leaving room for hype to front-run reality and for delays to matter.
- Benchmark gap: Nvidia’s performance claims are company-stated; independent validation and real-world TCO (total cost of ownership) will be the market’s judge.
- Competitive pressure: inference economics is exactly where hyperscalers and rivals push hardest (custom chips, alternative stacks).
- AI capex digestion: even if AI is structural, budgets are cyclical—order timing, pauses, and “wait-and-see” quarters can happen.
What to watch next
- Do hyperscalers reaffirm capex plans in the next earnings cycle? That’s the oxygen for the whole chain.
- Pricing signals: do AI infrastructure costs per workload keep falling (a sign the inference era is working)?
- Breadth: are investors rewarding only the megacaps, or do “AI plumbing” beneficiaries start to lead?
- Volatility: if “AI optimism” becomes crowded again, pullbacks can be violent even without bad fundamentals—good for risk management, not great for complacency.
Bottom line
Huang’s CES message can be summarised in one sentence: AI is moving from a breakthrough story to an operating model story.
That shift can broaden opportunity across tech, but it also raises the bar for proof, because the next phase is judged on economics and execution rather than excitement.
Read the original analysis: CES 2026: Nvidia’s playbook for the next phase of AI