At the 2025 GTC conference, Nvidia officially unveiled the RTX Pro 6000 GPU, based on the Blackwell architecture and aimed at professional workstations and the server market. The card uses the same GB202 chip as the consumer-grade RTX 5090 but steps up significantly in performance and specifications, targeting professional users such as designers, data scientists, and AI developers. The RTX Pro 6000 comes in three variants: a workstation edition, a Max-Q workstation edition, and a server edition, covering high-performance needs across different deployment scenarios.
The core specifications of the RTX Pro 6000 are impressive. It features 188 streaming multiprocessors (SMs), only about 2% short of the GB202 chip's full 192 SMs and 10.6% more than the RTX 5090's 170 SMs, giving it 24,064 CUDA cores, 752 Tensor cores, and 188 ray-tracing cores. Nvidia claims single-precision (FP32) performance of up to 125 TFLOPS and AI throughput of 4,000 TOPS (at FP4 precision), which implies a boost clock of roughly 2.6GHz. The card also gets the full 128MB of L2 cache, up from the RTX 5090's 96MB, along with four NVENC encoders and four NVDEC decoders for stronger video processing.
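For readers who want to sanity-check these numbers, the arithmetic is simple. The sketch below assumes Blackwell's 128 FP32 CUDA cores per SM and treats the ~2.6GHz boost clock as an inference back-calculated from the 125 TFLOPS claim, not a published specification:

```python
# Back-of-envelope check of the figures quoted above.
sms = 188                      # enabled streaming multiprocessors
cuda_cores = sms * 128         # 128 FP32 lanes per Blackwell SM -> 24,064
tensor_cores = sms * 4         # 4 Tensor cores per SM -> 752
rt_cores = sms * 1             # 1 RT core per SM -> 188

boost_clock_ghz = 2.6          # assumed; inferred from Nvidia's 125 TFLOPS claim
fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000   # 2 FLOPs per FMA per clock

print(cuda_cores, tensor_cores, rt_cores)   # 24064 752 188
print(f"{fp32_tflops:.0f} TFLOPS FP32")     # ~125 TFLOPS
```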
Memory configuration is a standout feature of the RTX Pro 6000. It uses 24Gb (3GB) GDDR7 chips rather than the 2GB chips common across the RTX 50 series, reaching a massive 96GB of memory on a 512-bit bus, a capacity that points to chips mounted on both sides of the board in a clamshell layout. At 28Gbps per pin, total bandwidth comes to 1,792GB/s, a design well suited to large datasets and complex AI model training. The RTX 5090, by comparison, offers the same bandwidth but only 32GB of memory, a clear gap in capacity.
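The bandwidth figure follows directly from the bus width and per-pin data rate; the sketch below also shows why 96GB of 3GB chips on a 512-bit bus implies a dual-sided (clamshell) layout, which is our reading rather than a confirmed board detail:

```python
# Memory bandwidth from bus width and per-pin data rate.
# Both the RTX Pro 6000 and the RTX 5090 pair a 512-bit bus with 28 Gbps GDDR7,
# so they land on the same bandwidth; only capacity differs.
bus_width_bits = 512
data_rate_gbps = 28            # per pin

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8    # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s")                      # 1792 GB/s

# Capacity: 3 GB (24 Gb) modules on a 512-bit bus give 48 GB single-sided,
# so 96 GB implies two modules per 32-bit channel (clamshell), i.e. 32 chips.
chips = 2 * (bus_width_bits // 32)
print(chips * 3, "GB")                                   # 96 GB
```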
The RTX Pro 6000 series also brings distinctive features. The workstation and server editions support Multi-Instance GPU (MIG) technology, allowing a single GPU to be partitioned into up to four independent instances (each with 24GB of memory) for better multitasking and parallel processing. Additionally, the ninth-generation NVENC encoder adds 4:2:2 encoding support, while the sixth-generation NVDEC decoder doubles H.264 decoding throughput, making the card well suited to video editing and real-time streaming.
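On the software side, MIG instances appear as separate devices that can be enumerated through NVML. The sketch below uses the pynvml bindings; whether shipping drivers expose the RTX Pro 6000's MIG instances exactly this way remains an assumption until the card is actually available:

```python
# Minimal sketch: checking MIG state via the NVML Python bindings
# (pip install nvidia-ml-py). Exposure of MIG on the RTX Pro 6000 through
# this path is assumed, not verified, since the hardware has not shipped.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Raises an NVMLError on GPUs without MIG support.
    current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

    # Enumerate whatever instances the driver has carved out of the GPU.
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            break  # no instance configured at this index
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        # With a 4-way split of 96GB, each instance should report about 24GB.
        print(f"instance {i}: {mem.total / 2**30:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```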
In terms of performance, Nvidia positions the RTX Pro 6000 well ahead of the previous-generation, Ada Lovelace-based L40S. According to the company's own AI inference figures, large language model throughput improves by up to 5x, genomic sequencing by nearly 7x, text-to-video generation by 3.3x, and recommendation-system inference and rendering each by roughly 2x. These numbers underline the RTX Pro 6000's advantages in AI, scientific computing, and visual creation.
Pricing has not yet been disclosed, but judging by historical pricing for professional-grade GPUs, the RTX Pro 6000 is likely to exceed $10,000, far above the RTX 5090, which lists at $1,999 but has been selling at retail for roughly $3,000. This premium reflects its professional market positioning and top-tier hardware specifications.
The release of the RTX Pro 6000 Blackwell series marks yet another leap forward for Nvidia in the professional computing space. With its massive 96GB memory, formidable AI computing power, and versatile variant designs, it promises to deliver a groundbreaking experience for tech enthusiasts and professionals requiring extreme performance. More details and real-world test data are eagerly anticipated in the coming days.