Fueled by the AI boom, NVIDIA continues to lead the market on the strength of its GPU technology. At the GTC conference in March 2025, NVIDIA CEO Jensen Huang revealed that the company has already sold more than 3 million Blackwell AI GPUs this year. Forecasts cited at the event suggest that annual data-center capital spending could surpass $1 trillion by 2028, with NVIDIA’s data center business positioned as the primary beneficiary. Meanwhile, former Intel CEO Pat Gelsinger, in a podcast interview recorded at GTC, offered a contrasting view of NVIDIA’s AI strategy: he argued that Huang’s success in AI owes something to luck, and that NVIDIA’s GPUs are excessively expensive for inference workloads, leaving considerable room for improvement in cost-effectiveness.
NVIDIA’s Blackwell GPU is the star product in today’s AI market. The GPU, a class of processor originally designed for graphics rendering, has become the core engine of AI training and inference. At the conference, Huang showcased the latest Blackwell-based systems, including the DGX Spark and DGX Station personal AI computers, and highlighted their potential in robotics, autonomous driving, and generative AI. For instance, a DGX system equipped with eight Blackwell GPUs, serving the 671-billion-parameter DeepSeek-R1 model, achieved a per-user inference speed of more than 250 tokens per second and a maximum throughput of 30,000 tokens per second, a 36-fold throughput improvement over the figures NVIDIA reported earlier this year, with the cost per token falling by a factor of roughly 32. These gains stem from innovations in the Blackwell architecture, including the second-generation Transformer Engine and fifth-generation NVLink interconnect. However, Gelsinger argued that while these GPUs excel at training large models, resource utilization and cost control during inference still leave much to be desired: a price tag of tens of thousands of dollars per high-end GPU may deter enterprises from deploying inference at scale.
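To put those throughput numbers in perspective, the back-of-the-envelope sketch below converts an hourly system cost into a cost per million generated tokens. The $300-per-hour figure is a hypothetical placeholder rather than an NVIDIA price; only the 30,000 tokens-per-second throughput comes from the numbers quoted above.

```python
# Back-of-the-envelope sketch: per-token cost falls in proportion to throughput
# gains when the hourly cost of the system stays fixed. The hourly price below
# is a hypothetical placeholder; only the throughput figure is from the article.

def cost_per_million_tokens(system_cost_per_hour: float, tokens_per_second: float) -> float:
    """Dollars per one million generated tokens at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3_600
    return system_cost_per_hour / tokens_per_hour * 1_000_000

aggregate_tps = 30_000  # quoted max throughput of an 8-GPU Blackwell DGX on DeepSeek-R1
hourly_cost = 300.0     # hypothetical rental price for the whole system, in dollars

baseline = cost_per_million_tokens(hourly_cost, aggregate_tps / 36)  # before the quoted speedup
current = cost_per_million_tokens(hourly_cost, aggregate_tps)

print(f"before the 36x gain: ~${baseline:.2f} per 1M tokens")
print(f"after the 36x gain:  ~${current:.2f} per 1M tokens")
```

Under these assumptions the per-token cost falls by the same factor as throughput rises, which is the sense in which a 36-fold speedup and a roughly 32-fold cost reduction go hand in hand.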
In contrast, Intel’s progress in the AI market has been sluggish. During his tenure, Gelsinger repeatedly expressed wariness of NVIDIA, describing its CUDA ecosystem as an “insurmountable moat” and identifying AI inference as the key battleground of the future. However, Intel’s Gaudi series accelerators have failed to compete head-on with NVIDIA’s Hopper or AMD’s Instinct. Although Gaudi 3, launched in 2024, touted a price advantage, its performance lagged behind NVIDIA’s H100, let alone the newer Blackwell series. Intel had pinned high hopes on its Falcon Shores GPU, planned for release by the end of 2025 and touted as a fusion of the Xe architecture and Gaudi technology that could challenge NVIDIA’s dominance. Earlier this year, however, the company canceled it as a commercial product, redirecting resources to a next-generation rack-scale solution called Jaguar Shores. Intel’s new CEO, Lip-Bu Tan, has promised to restore the company’s competitiveness since taking office, though tangible results will take time to materialize.
NVIDIA’s success is no fluke. Long before the AI wave took off, Jensen Huang insisted on expanding GPUs from graphics rendering to general-purpose computing—a foresight that allowed him to seize the opportunity presented by the deep learning boom. Today, NVIDIA not only leads in hardware but has also solidified its market position through software ecosystems like TensorRT and NIM microservices. At GTC 2025, Huang unveiled future plans: the Blackwell Ultra, set for release in the second half of 2025, will enhance the existing architecture, while the Rubin GPU, slated for 2026, will introduce HBM4 memory to boost bandwidth and energy efficiency. These products will integrate closely with projects from partners like Google DeepMind, Disney, and General Motors, driving AI’s real-world applications.
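As a concrete illustration of what that software layer looks like from the developer side, the sketch below queries a locally deployed NIM microservice through its OpenAI-compatible endpoint; the base URL, API key, and model identifier are illustrative assumptions rather than details taken from this article.

```python
# Minimal sketch of calling a NIM inference microservice. NIM containers expose
# an OpenAI-compatible HTTP API, so the standard `openai` client can be reused.
# The base_url, api_key, and model name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM endpoint
    api_key="not-needed-for-local-deployments",
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-r1",       # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize the Blackwell architecture in one sentence."}],
    max_tokens=128,
)

print(response.choices[0].message.content)
```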
While Gelsinger questioned NVIDIA’s inference costs, he also acknowledged Huang’s resilience as a key to his success. In the podcast, he recalled Huang saying that he had never set out to build chips specifically for AI; he approached the problem from the perspective of solving computational workloads, and that stance ultimately gave NVIDIA a first-mover advantage in AI. Gelsinger also looked ahead to the future of computing, suggesting that quantum computing could be the next breakthrough and predicting that it will become commercially relevant by the end of the decade. For now, however, the AI market remains centered on GPU-driven training and inference, with NVIDIA firmly in the lead.
Leveraging its first-mover advantage and relentless pace of innovation, NVIDIA is projected to take in roughly $195 billion in revenue in fiscal 2026, which covers calendar 2025, with data center GPUs accounting for the lion’s share. By comparison, Intel’s revenue in the fourth quarter of 2024 was just $14.3 billion, down 7% year-over-year, with its AI business contributing negligibly. The strategic divergence between the two companies is stark: NVIDIA is betting on full-stack AI solutions, while Intel seeks a breakthrough through its x86 ecosystem and cost optimization. Whether Jaguar Shores can help Intel turn the tide remains uncertain. Meanwhile, the Blackwell series is capturing the market at a staggering pace, leaving competitors little time to catch up.