AMD DDR5 Patent: Bandwidth Doubled to 12.8 Gbps #
AMD has filed a new DDR5 patent featuring a High Bandwidth Dual In-line Memory Module (HB-DIMM) architecture that can effectively double memory speed from 6.4 Gbps to 12.8 Gbps.
The design uses pseudo-channels and smart signaling, allowing each HB-DIMM to operate at twice the data transfer rate through a buffer chip. Interestingly, this speed approaches the current DDR5 overclocking ceiling of about 13 Gbps.
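The doubling claim reduces to simple arithmetic. As a back-of-envelope sketch (the two-pseudo-channel count and the buffer's interleaving behavior are assumptions, not details from the patent text):

```python
# Rough sketch of the HB-DIMM doubling claim.
# Assumptions (not from the patent text): two pseudo-channels per DIMM,
# each running at the standard DDR5 per-pin rate, with the buffer chip
# re-serializing their combined output toward the host.

BASE_RATE_GBPS = 6.4   # standard DDR5 per-pin data rate
PSEUDO_CHANNELS = 2    # assumed pseudo-channel count

def effective_host_rate(base_rate_gbps: float, channels: int) -> float:
    """Effective per-pin rate at the host if the buffer interleaves
    `channels` pseudo-channels."""
    return base_rate_gbps * channels

print(effective_host_rate(BASE_RATE_GBPS, PSEUDO_CHANNELS))  # 12.8
```

Under these assumptions the buffer-side math lands exactly on the 12.8 Gbps figure the patent cites.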
A major advantage of this patent is that it does not alter the underlying DDR5 standard. Instead, it extends performance through additional techniques, ensuring compatibility with existing platforms without requiring a complete overhaul.
As both AI and graphics workloads demand ever more bandwidth, this approach could ease consumer platforms' dependence on costly HBM memory. However, since the design is still at the patent stage, no commercial release timeline is available.
Product Spotlight: ASUS Turbo Radeon AI Pro R9700 32GB #
Alongside AMD's DDR5 update, ASUS has officially launched the Turbo Radeon AI Pro R9700 32GB GPU, targeting AI developers and professional users with efficient cooling and a compact design.
This card is ASUS's first AMD GPU to feature the 12V-2x6 power connector. It comes with 32 GB of GDDR6 memory on a 256-bit bus and is housed in a dual-slot blower-style cooler. The board measures 26.7 cm in length and requires at least a 750W power supply.
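The article states the bus width but not the GDDR6 pin speed, so peak memory bandwidth can only be estimated. A minimal sketch, assuming a typical 20 Gbps GDDR6 rate (the pin speed is a hypothetical value, not from the announcement):

```python
# Hypothetical bandwidth estimate for a 256-bit GDDR6 configuration.
# The 20 Gbps pin rate below is an assumed, typical GDDR6 speed;
# the article does not state the R9700's actual memory clock.

BUS_WIDTH_BITS = 256
ASSUMED_PIN_RATE_GBPS = 20.0  # assumption, not stated in the article

def memory_bandwidth_gbs(bus_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin rate."""
    return (bus_bits / 8) * pin_rate_gbps

print(memory_bandwidth_gbs(BUS_WIDTH_BITS, ASSUMED_PIN_RATE_GBPS))  # 640.0
```

At that assumed rate the card would offer roughly 640 GB/s, comfortably below what HBM stacks deliver, which is consistent with the broader point about HBM still dominating bandwidth-bound workloads.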
ASUS highlights that its reinforced die-cast shroud and backplate can reduce memory temperatures by up to 16%, while the phase-change GPU thermal pad conducts heat more effectively than traditional thermal paste for long-term stability.
The card also supports one-click overclocking via ASUS GPU Tweak III, raising the boost clock to 2940 MHz (20 MHz above reference) with a game clock of 2370 MHz, making it the first factory-overclocked R9700.
Launched in July, the Radeon AI Pro R9700 has no AMD reference model. It integrates 128 AI accelerators and delivers 1531 TOPS (INT4) performance, making it suitable for AI inference, training tasks, and professional workstation workloads.