
Author: Siu-Ho Fung
June 16, 2025
At GTC Paris, NVIDIA CEO Jensen Huang delivered a powerful keynote that left the audience speechless. In a presentation filled with breakthroughs, humor, and vision, Huang unveiled NVIDIA's latest AI computing innovations, setting the stage for a new industrial revolution powered by AI.
Huang emphasized the limitations of Moore’s Law, which now delivers only a 2x performance gain every few years. In contrast, NVIDIA’s Blackwell architecture, the successor to Hopper, achieves 30 to 40 times more performance in a single generation. This leap enables a new era of reasoning AI models, which generate exponentially more tokens and require much greater computational power.
The Blackwell-based systems are not just powerful; they're monumental. Each unit weighs two tons, comprises over a million parts, and costs approximately $3 million to manufacture, with an estimated $40 billion in R&D poured into the platform. Despite the scale, NVIDIA is now mass-producing 1,000 supercomputers per week, an unprecedented feat.
One of the most jaw-dropping moments was Huang's presentation of NVLink, a proprietary interconnect with 130 terabytes per second of bandwidth, more than the peak of global internet traffic. The NVLink spine uses 100% copper coaxial cabling to connect 72 Blackwell chips seamlessly, effectively turning them into one giant virtual GPU. This enables unmatched scalability and performance in AI factories.
In what has become an iconic and humorous anecdote, Huang recalled the 2016 launch of NVIDIA's DGX-1 AI supercomputer. The reaction at the time? Total confusion. No applause, no customers, no real interest. “Why would anyone build a computer like this? Does it run Windows? Nope,” Huang recalled.
Shortly after, he received an email from a small startup in San Francisco. They were excited and asked if they could have one. Delighted, Huang loaded a DGX-1 into his car and drove it up to their office. “I thought, ‘Wow, we sold our first one!’” he said. “And then I found out… it was a nonprofit.”
That startup? OpenAI.
NVIDIA is making AI infrastructure accessible through scalable systems. From Grace Blackwell DGX racks to desktop-sized DGX Station systems, the architecture remains consistent. Developers can build anywhere, whether in the cloud, at the edge, or on a local machine, and rely on seamless compatibility.
Additionally, NVIDIA introduced the RTX Pro server, capable of running nearly every application ever written, including Windows, Linux, Kubernetes, and yes, even Crysis 😛. It's the only server in the world that supports both the complete NVIDIA software stack and general-purpose enterprise workloads.

Huang highlighted the progress of open-source models like Mistral, LLaMA, and DeepSeek. Through NVIDIA's Nemotron initiative, these models are post-trained, enhanced, and packaged into downloadable NIMs (NVIDIA Inference Microservices), ready-to-deploy APIs for advanced AI. These models now top global benchmarks and remain open and customizable.
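To give a feel for what "ready-to-deploy API" means in practice, here is a minimal sketch of calling a NIM-style endpoint, assuming an OpenAI-compatible chat interface served from a locally deployed container. The URL and model ID below are illustrative placeholders; the exact values depend on the NIM you deploy and NVIDIA's documentation.

```python
import requests

# Hypothetical local NIM endpoint; the actual host, port, and model name
# depend on the NIM container you deploy.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "nvidia/llama-3.1-nemotron-70b-instruct",  # example model ID, adjust to your deployment
    "messages": [
        {"role": "user", "content": "Summarize the key announcements from GTC Paris."}
    ],
    "max_tokens": 256,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```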
NVIDIA’s AI innovations extend into the physical world through Omniverse, a platform enabling companies like BMW and Toyota to build digital twins of factories and warehouses. These virtual environments simulate physics with photorealism, allowing robots to learn before entering the real world.
Huang also revealed progress in humanoid robotics, showcasing a robot trained entirely in simulation. Through partnerships with Disney Research and DeepMind, NVIDIA is pushing the boundaries of physics-based virtual environments. These are where robots learn to walk, balance, and eventually operate alongside humans, all within Omniverse before entering the physical world.
Huang outlined the evolution from one-shot models (like early ChatGPT) to Agentic AI, systems capable of reasoning, planning, and solving problems through iterative token generation. The rise in token volume, from hundreds to tens of thousands per session, justifies the need for advanced infrastructure like Grace Blackwell.
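As a rough back-of-the-envelope illustration (the specific numbers here are illustrative, not figures from the keynote): if a one-shot reply is a few hundred tokens and an agentic session produces tens of thousands, per-query compute grows by roughly two orders of magnitude at a fixed cost per token.

```python
# Illustrative numbers only, not figures from the keynote.
one_shot_tokens = 500      # typical short chatbot reply
agentic_tokens = 50_000    # reasoning, planning, and tool-use traces

scale_up = agentic_tokens / one_shot_tokens
print(f"Per-query token volume (and thus compute, at fixed cost per token) grows ~{scale_up:.0f}x")
```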
Agentic AI represents a dramatic leap from earlier-generation chatbots. These systems don't just answer; they think, break down problems, evaluate alternatives, and use tools. That kind of reasoning requires entirely new performance levels and system designs.
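To make the contrast with one-shot chatbots concrete, here is a deliberately simplified, hypothetical sketch of an agentic loop: the model repeatedly reasons over the conversation, optionally calls a tool, and feeds the observation back into its context until it decides it is done. The callables `call_model` and `run_tool` are placeholders for whatever model and tooling you actually use, not a real NVIDIA API.

```python
def agentic_loop(task, call_model, run_tool, max_steps=10):
    """Toy agentic loop: reason -> optionally act -> observe -> repeat.

    call_model(history) -> dict with keys 'thought', 'action' (or None), 'answer' (or None)
    run_tool(action)    -> string with the tool's result
    Both callables are placeholders; this is a conceptual sketch, not a product API.
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_model(history)                       # model reasons over the full history
        history.append({"role": "assistant", "content": step["thought"]})
        if step.get("answer"):                           # model decided it is finished
            return step["answer"]
        if step.get("action"):                           # model chose to use a tool
            observation = run_tool(step["action"])
            history.append({"role": "tool", "content": observation})
    return "No answer within the step budget."
```

Each pass through the loop generates more tokens of reasoning and tool output, which is exactly why agentic workloads multiply the token volume described above.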
Huang introduced DGX Lepton, a platform that lets developers deploy AI across any cloud or on-prem system using a single architecture. These "AI factories" will form the core of future infrastructure, optimized to generate tokens, the new fuel of digital economies.
With Lepton, developers can run AI models anywhere, from local workstations to hyperscale cloud platforms, through a unified deployment pipeline. It’s the foundation of what Huang called the “Supercloud”, a consistent development and runtime experience, regardless of infrastructure provider.
Huang closed with an optimistic outlook on AI growth in Europe, noting that regional infrastructure will grow tenfold in the coming years. With humor, technical brilliance, and visionary clarity, his presentation made it clear: the AI industrial revolution has arrived, and NVIDIA is building its foundation.
“Everything that moves will be AI-driven.” - Jensen Huang


Do you have questions or need help? We're happy to assist.