The race for data center network supremacy is heating up. In 2025, with AI and high-performance computing (HPC) workloads driving unprecedented demand, global chipmakers are battling to deliver faster, smarter, and more efficient networking solutions.
At the Hot Chips 2025 conference, Broadcom, NVIDIA, AMD, and Intel unveiled their latest products—switching ASICs, SuperNICs, programmable NICs, and IPUs—all aiming to solve the same challenges: bandwidth, latency, and scalability.
This article compares the Broadcom Tomahawk Ultra, NVIDIA ConnectX-8 SuperNIC, AMD Pollara 400, and Intel IPU E2200, highlighting what each brings to next-gen data centers.
Broadcom Tomahawk Ultra: Ethernet at 51.2T #
Broadcom is determined to make Ethernet viable for AI and HPC workloads with its flagship Tomahawk Ultra switch.
- Throughput: 51.2Tbps switching capacity with 512 × 100G-PAM4 ports
- Packet Processing: 77B packets/sec at 64-byte packet size (see the quick check below)
- In-Network Computing: Supports collective communication ops for AI
- Key Features:
  - Link Layer Retry (LLR) for reliable transfers
  - Credit-Based Flow Control (CBFC) to prevent buffer overloads
  - AI Fabric Header (AFH) for efficient payload-to-header ratios
  - Adaptive topology-aware routing and congestion control
Broadcom positions Tomahawk Ultra as a low-latency switch optimized for small packets, which is critical for distributed AI training and HPC.
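The headline packet rate follows directly from the switching capacity: at the 64-byte minimum Ethernet frame size, each frame occupies 84 bytes on the wire once the preamble, start-of-frame delimiter, and minimum inter-frame gap are counted. The check below uses only these standard Ethernet overheads, not Broadcom-published internals.

```python
# Back-of-envelope check: packets/sec a 51.2 Tbps switch can forward at
# minimum Ethernet frame size. Uses standard Ethernet wire overheads only.
SWITCH_CAPACITY_BPS = 51.2e12   # 51.2 Tbps
FRAME_SIZE = 64                 # bytes, minimum Ethernet frame
PREAMBLE_SFD = 8                # bytes of preamble + start-of-frame delimiter
INTER_FRAME_GAP = 12            # bytes of minimum inter-frame gap

wire_bits_per_frame = (FRAME_SIZE + PREAMBLE_SFD + INTER_FRAME_GAP) * 8  # 672 bits
packets_per_sec = SWITCH_CAPACITY_BPS / wire_bits_per_frame

# Prints ~76.2 B packets/sec, consistent with the ~77B figure quoted above
# (exact overhead accounting varies between vendors).
print(f"{packets_per_sec / 1e9:.1f} B packets/sec")
```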
NVIDIA ConnectX-8 SuperNIC: PCIe Gen6 and 800GbE #
NVIDIA responded to rising competition with its ConnectX-8 SuperNIC, now shipping in volume.
- Speed: 800GbE, PCIe Gen6 support
- Architecture: Works across both Spectrum-X Ethernet and Quantum-X InfiniBand
- Expansion: Built-in PCIe Gen6 switch (up to 48 lanes)
- Key Features:
  - Optimized for AI training & inference
  - NCCL acceleration for AllReduce and AllToAll ops (see the sketch below)
  - Integrated congestion control via Spectrum-X
  - Data Path Accelerator (DPA) with RISC-V cores
  - PSA programmable packet pipeline
The GB200 NVL72 cluster is the first to integrate this SuperNIC, enabling seamless GPU-to-GPU connectivity in AI supercomputers.
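To make the NCCL point concrete, here is a minimal all-reduce over the NCCL backend using PyTorch's distributed API. This is generic NCCL usage rather than ConnectX-8-specific code, and it assumes a multi-GPU node launched with torchrun.

```python
# Minimal NCCL all-reduce (generic PyTorch, not ConnectX-8-specific).
# Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_demo.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")      # NCCL carries the GPU-to-GPU traffic
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor; all_reduce sums it across every rank in place.
    x = torch.full((4,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    print(f"rank {dist.get_rank()}: {x.tolist()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In a real training job this same collective runs on every gradient bucket, which is why accelerating it in the NIC and switch fabric pays off.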
AMD Pollara 400: Programmable, UEC-Compliant AI NIC #
AMD’s Pollara 400 AI NIC takes a different approach, embracing programmable networking and UEC (Ultra Ethernet Consortium) standards.
- Speed: 400GbE, optimized for AI workloads
- Design Choice: 1:1 GPU-to-NIC mapping instead of PCIe switches
- Programmability: Built with the P4 language for packet pipeline customization
- Key Features:
  - Atomic memory operations for data consistency
  - Pipeline cache coherence for speed
  - Congestion control tailored for AI clusters
  - Multipathing & selective retransmission (SACK; see the sketch below)
  - Tight integration with RCCL (AMD’s NCCL equivalent)
By combining UEC standardization with programmable hardware, AMD aims to build a scalable, open ecosystem for AI data centers.
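To illustrate why selective retransmission matters, the sketch below tracks received sequence numbers and reports only the gaps for retransmission, rather than resending everything after the first loss as a go-back-N scheme would. It is a conceptual model, not Pollara’s actual transport implementation.

```python
# Conceptual sketch of selective retransmission (SACK-style): only the missing
# sequence numbers are re-requested, not everything after the first gap.
# Illustration only, not AMD Pollara's transport logic.

def missing_sequences(received: set[int], highest_sent: int) -> list[int]:
    """Return the sequence numbers the sender should retransmit."""
    return [seq for seq in range(highest_sent + 1) if seq not in received]

# Example: packets 3 and 7 were lost out of 0..9.
received = {0, 1, 2, 4, 5, 6, 8, 9}
print(missing_sequences(received, 9))   # [3, 7] -> retransmit just these two
# A go-back-N scheme would instead resend every packet from 3 onward.
```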
Intel IPU E2200: Offloading the Data Center #
Intel’s IPU E2200—codenamed Mount Morgan—is its latest push into infrastructure processing, built on TSMC’s N5 process.
- Throughput: 400Gbps, rivaling NVIDIA BlueField-3 and AMD Salina 400
- Compute: Up to 24 Arm Neoverse N2 cores, 4-channel LPDDR5 memory
- Flexibility: Three modes—Multi-Host, Headless, Converged
- Key Features:
  - P4-programmable packet processor (FXP)
  - Dual encryption engines (inline + lookaside)
  - PCIe Gen5 x32 lanes with integrated switch
  - RDMA engine for HPC/AI workloads
  - Traffic Shaper with Timing Wheel Algorithm (see the sketch below)
The E2200’s focus is clear: offload infrastructure tasks to free up CPU/GPU resources. Intel’s challenge will be ecosystem adoption, but it already has hyperscale partners like Google on board.
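The timing-wheel idea behind the traffic shaper is a classic pacing technique: each packet is assigned a future transmit slot based on its rate limit, and a rotating cursor releases whichever slot is due. The sketch below is a simplified single-level wheel in software, not Intel’s hardware design.

```python
# Simplified single-level timing wheel for packet pacing (conceptual only,
# not the E2200's hardware traffic shaper).
from collections import deque

class TimingWheel:
    def __init__(self, num_slots: int, tick_ns: int):
        self.slots = [deque() for _ in range(num_slots)]
        self.tick_ns = tick_ns   # time covered by one slot
        self.cursor = 0          # slot currently being drained

    def schedule(self, packet, delay_ns: int):
        """Place a packet in the slot that becomes due after delay_ns."""
        offset = min(delay_ns // self.tick_ns, len(self.slots) - 1)
        slot = (self.cursor + offset) % len(self.slots)
        self.slots[slot].append(packet)

    def advance(self):
        """Advance one tick and release every packet whose slot is due."""
        due = list(self.slots[self.cursor])
        self.slots[self.cursor].clear()
        self.cursor = (self.cursor + 1) % len(self.slots)
        return due

# Pace two packets: pkt-B is rate-limited harder, so it lands in a later slot.
wheel = TimingWheel(num_slots=8, tick_ns=1000)
wheel.schedule("pkt-A", delay_ns=1000)   # due after 1 tick
wheel.schedule("pkt-B", delay_ns=3000)   # due after 3 ticks
for tick in range(4):
    released = wheel.advance()
    if released:
        print(f"tick {tick}: transmit {released}")
```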
Who Wins the 2025 Networking Race? #
Each vendor is tackling the AI networking bottleneck differently:
- Broadcom → Ethernet switches with ultra-low latency
- NVIDIA → High-bandwidth SuperNICs tightly coupled with GPUs
- AMD → Programmable AI NICs aligned with open UEC standards
- Intel → Flexible IPUs for offloading infrastructure workloads
The competition ensures rapid innovation in data center networking. For enterprises scaling AI clusters, the key takeaway is that choice is expanding—and multi-vendor strategies may deliver the best mix of performance, cost, and flexibility.