News has surfaced that Intel is preparing for the launch of its Diamond Rapids Xeon CPU. This processor, based on the “Oak Stream” platform, is slated for a 2026 release and will provide powerful support for data center, high-performance computing (HPC), and artificial intelligence (AI) workloads.
Core Count and Memory Enhancements #
The top-tier model of the Diamond Rapids Xeon CPU will feature 192 Performance-cores (P-Cores), distributed across four compute dies, each containing 48 cores. This design significantly boosts multi-core performance, enabling the processor to handle large-scale parallel computing tasks within a single CPU socket. This represents a 50% increase in core count compared to the 128-core configuration of the preceding “Granite Rapids” Xeon 6 series.
To meet the demanding computational needs of such a high core count, Intel has equipped the top model with 16-channel DDR5 memory, a substantial increase in memory bandwidth compared to the standard 8-channel configuration. The processor also supports second-generation MRDIMM memory, with a single DIMM transfer rate of up to 12800 MT/s. This, combined with the 16-channel design, ensures ample memory throughput for data-intensive applications like AI training and scientific computing. The single-socket power consumption is set at 500W.
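As a rough sanity check, the figures above imply the following theoretical peak memory bandwidth. This is a back-of-the-envelope sketch: the channel count and 12800 MT/s rate come from the report, the 64-bit channel width is the standard DDR5 assumption, and real-world throughput will be lower.

```python
# Theoretical peak memory bandwidth for the reported top configuration:
# 16 DDR5/MRDIMM channels at 12800 MT/s, assuming a standard 64-bit
# (8-byte) data bus per channel. Treat this as a ceiling, not a benchmark.

CHANNELS = 16
TRANSFER_RATE_MT_S = 12_800      # mega-transfers per second (2nd-gen MRDIMM)
BYTES_PER_TRANSFER = 8           # 64-bit channel width

per_channel_gb_s = TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000
total_gb_s = per_channel_gb_s * CHANNELS

print(f"Per channel: {per_channel_gb_s:.1f} GB/s")   # 102.4 GB/s
print(f"Total peak:  {total_gb_s:.1f} GB/s")         # 1638.4 GB/s
```

At roughly 1.6 TB/s of peak bandwidth, the 16-channel design leaves about 8.5 GB/s per core for the 192-core flagship, which is why the channel count grows alongside the core count.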
Process Technology and Microarchitecture Innovations #
As Intel’s first mass-produced processor to utilize the 18A process (equivalent to 1.8nm), “Diamond Rapids” marks a significant step forward in manufacturing technology. The 18A process, with its Gate-All-Around (GAA) transistors and backside power delivery, improves transistor density and power efficiency, providing better power control for high-core-count processors.
In terms of microarchitecture, the processor is based on the new “Panther Cove” cores, which further optimize instruction throughput and cache access efficiency compared to the previous “Redwood Cove” cores. Intel is also introducing the next-generation APX (Advanced Performance Extensions) instruction set, which expands the general-purpose register file and adds new instruction forms to cut down on loads, stores, and branches. Furthermore, Advanced Matrix Extensions (AMX) efficiency has been significantly improved, with support for more floating-point formats, including NVIDIA’s TF32 and low-precision FP8. These formats are widely used in AI inference and training, allowing the processor to efficiently handle basic inference tasks for smaller AI models and even run some advanced workloads without dedicated accelerators.
AI Acceleration Capabilities #
For AI acceleration, “Diamond Rapids” achieves pervasive AI computing capabilities through on-core AMX units: INT8 inference can reach 2048 operations per core per cycle, while BF16 and FP16 training reaches 1024 operations per core per cycle. This design enables the processor to flexibly handle AI tasks across edge devices, cloud services, and enterprise data centers. Compared to traditional solutions that rely on external GPUs, Intel’s native CPU AI acceleration reduces system complexity and total cost of ownership (TCO). To further support the AI ecosystem, the processor may be released concurrently with Intel’s “Jaguar Shores” AI accelerator in 2026, forming a complete AI computing platform.
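The per-core, per-cycle figures can be turned into a socket-level throughput estimate. Note that the report does not state a clock frequency; the 2.0 GHz sustained all-core clock below is purely an illustrative assumption, so the resulting TOPS numbers are hypothetical.

```python
# Socket-level AMX throughput estimate from the reported per-core figures:
# 2048 INT8 ops/cycle and 1024 BF16/FP16 ops/cycle per core.
# ASSUMED_CLOCK_GHZ is NOT from the report -- it is a placeholder.

CORES = 192
ASSUMED_CLOCK_GHZ = 2.0          # hypothetical sustained all-core clock

def tops(ops_per_core_per_cycle: int) -> float:
    """Tera-operations per second across the whole socket."""
    return ops_per_core_per_cycle * CORES * ASSUMED_CLOCK_GHZ / 1000

print(f"INT8:      ~{tops(2048):.0f} TOPS")
print(f"BF16/FP16: ~{tops(1024):.0f} TOPS")
```

Under that assumed clock, the flagship would land in the hundreds of TOPS for INT8, which is consistent with the article's claim that smaller inference workloads can run without a dedicated accelerator.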
IO Expansion and Multi-Socket Configurations #
IO expansion capability is another highlight of “Diamond Rapids.” The processor supports PCIe Gen 6, providing ultra-high bandwidth with up to 128 lanes, a significant improvement over PCIe Gen 5’s 80 lanes. PCIe Gen 6 offers a per-lane rate of 64 GT/s, suitable for connecting high-speed network adapters, storage devices, and external accelerators. Additionally, the processor supports 64 lanes of CXL 2.0, with a data transfer rate of 32 GT/s, enabling memory expansion and sharing to optimize data center memory pooling efficiency.
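The lane counts and per-lane rates above translate into the following approximate aggregate link bandwidth. This sketch ignores encoding and protocol overhead (FLIT framing, 128b/130b, headers), so real throughput is somewhat lower.

```python
# Approximate raw link bandwidth per direction, ignoring encoding and
# protocol overhead: PCIe Gen 6 at 64 GT/s across 128 lanes, and
# CXL 2.0 at 32 GT/s across 64 lanes, as reported above.

def lane_gb_s(gt_per_s: float) -> float:
    """GB/s per lane per direction: one transfer carries ~1 bit on the wire."""
    return gt_per_s / 8

pcie6_total = lane_gb_s(64) * 128   # 128 lanes of PCIe Gen 6
cxl_total = lane_gb_s(32) * 64      # 64 lanes of CXL 2.0

print(f"PCIe Gen 6, 128 lanes: ~{pcie6_total:.0f} GB/s per direction")  # ~1024
print(f"CXL 2.0, 64 lanes:     ~{cxl_total:.0f} GB/s per direction")    # ~256
```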
In multi-socket configurations, the processor utilizes the LGA 9324 socket, supporting 1S, 2S, and 4S architectures. A single four-socket server can provide up to 768 cores, with total CPU power consumption of approximately 2000W. This high-density configuration is particularly well-suited for hyperscale cloud computing and HPC applications.
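The multi-socket totals follow directly from the per-socket figures already given (192 cores and 500W per socket):

```python
# Core and CPU power totals for the supported socket configurations,
# using the per-socket figures from the report (192 cores, 500 W).

CORES_PER_SOCKET = 192
WATTS_PER_SOCKET = 500

for sockets in (1, 2, 4):
    cores = sockets * CORES_PER_SOCKET
    watts = sockets * WATTS_PER_SOCKET
    print(f"{sockets}S: {cores} cores, ~{watts} W CPU power")
```

The 4S case reproduces the 768-core, ~2000W figure quoted for a single server.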
Diamond Rapids introduces Intel Ultra Path Interconnect (UPI) 2.0, increasing cross-socket bandwidth to 24 GT/s, a 20% improvement over the previous generation. Combined with up to 504MB of L3 cache and the 16-channel memory design, the processor demonstrates outstanding performance in scenarios like databases, scientific computing, and AI clusters. According to Intel’s internal tests, AI application performance is improved by approximately 1.8 to 2.4 times compared to the fifth-generation Xeon, and scientific computing performance by 2.5 times. These figures suggest “Diamond Rapids” holds an advantage in handling complex workloads.
Market Position and Compatibility #
Currently, Intel faces intense competition from AMD EPYC processors. Market data shows that AMD’s share in the server market reached 27.2% in the first quarter of 2025, and Intel urgently needs to regain the initiative through technological innovation. Furthermore, the surging global demand for AI compute power has driven server processors towards high performance and low power consumption. Intel’s 18A process and modular x86 architecture provide flexibility, meeting diverse demands from entry-level servers to high-end HPC.
Diamond Rapids also showcases Intel’s efforts in platform compatibility. Its “Oak Stream” platform maintains partial compatibility with the previous “Birch Stream,” simplifying data center upgrade paths. The processor supports the latest CXL Types 1, 2, and 3 protocols, allowing both standard PCIe and CXL devices to operate simultaneously on the same link, further reducing latency and cost. For enterprise-grade reliability, Intel has enhanced RAS (Reliability, Availability, and Serviceability) features, handling hardware exceptions through Machine Check Architecture (MCA) to ensure stable system operation under high loads.
Conclusion #
The Diamond Rapids Xeon CPU, with its 192 P-cores, 16-channel DDR5 memory, PCIe Gen 6, and native AI acceleration, establishes its competitive advantage in the future server market. Its 2026 release not only signifies Intel’s technological leap in process and architecture but also provides brand-new computing solutions for data centers and AI applications. As the demand for AI and high-performance computing continues to grow, this processor is expected to play a crucial role in the industry.