
AMD Unveils Memory Patent to Double Bandwidth with HB-DIMM Technology


Against the backdrop of rising processor performance, memory bandwidth is increasingly becoming a system bottleneck. AMD has recently filed a patent for High Bandwidth DIMM (HB-DIMM), a technology that promises to double memory bandwidth without requiring faster DRAM chips. Instead of depending solely on manufacturing process upgrades, HB-DIMM achieves higher throughput by embedding additional logic directly into the memory module.

[Figure: AMD memory patent illustration]

How HB-DIMM Works

At the core of HB-DIMM is the use of the Registering Clock Driver (RCD) and data buffer chips on a standard DDR5 module. Through retiming and multiplexing, these buffers merge two DRAM data streams into a single higher-speed output.
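
To make the retiming-and-multiplexing idea concrete, here is a minimal Python sketch of 2:1 time multiplexing. It is a conceptual illustration only, not AMD's actual buffer logic; the beat labels and stream lengths are invented for the example.

```python
# Conceptual 2:1 multiplexing: two DRAM devices each deliver data at the base
# rate, and the data buffer interleaves them onto one output running twice as fast.

def mux_2to1(stream_a, stream_b):
    """Interleave two equal-length base-rate streams into one double-rate stream."""
    assert len(stream_a) == len(stream_b)
    out = []
    for a, b in zip(stream_a, stream_b):
        out.extend([a, b])  # two base-rate beats fill one fast-output interval
    return out

dram0 = ["A0", "A1", "A2", "A3"]  # beats from DRAM device 0 at the base rate
dram1 = ["B0", "B1", "B2", "B3"]  # beats from DRAM device 1 at the base rate

print(mux_2to1(dram0, dram1))  # one pin's worth of output at double the rate
```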

The per-pin data rates show the gain:

  • Current DDR5 → 6.4 Gb/s per pin
  • With HB-DIMM → 12.8 Gb/s per pin

This effectively doubles the bandwidth while keeping the existing DDR5 manufacturing process unchanged. Unlike traditional approaches that push DRAM process scaling, HB-DIMM improves performance at the module level, making it easier and faster to adopt.
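
A quick back-of-the-envelope calculation shows what those per-pin rates mean at the module level. The 64-bit data-bus width (ECC excluded) is our assumption of a standard DDR5 DIMM; the per-pin figures come from the patent description.

```python
# Peak DIMM bandwidth from per-pin rate and data-bus width (assumed 64-bit).

def dimm_bandwidth_gbs(per_pin_gbps, bus_width_bits=64):
    """Peak DIMM bandwidth in GB/s given per-pin rate (Gb/s) and bus width."""
    return per_pin_gbps * bus_width_bits / 8  # bits -> bytes

print(dimm_bandwidth_gbs(6.4))   # current DDR5  -> 51.2 GB/s per DIMM
print(dimm_bandwidth_gbs(12.8))  # with HB-DIMM  -> 102.4 GB/s per DIMM
```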

Applications in AI, Data, and APUs

The patent highlights HB-DIMM’s potential in AI training, large-scale data processing, and integrated graphics.

  • AI/ML workloads → Faster data access improves training and inference efficiency.
  • APUs/iGPUs → Overcome the bandwidth bottleneck of shared system memory, boosting graphics and AI responsiveness.
  • Dual PHY Design → A standard DDR5 PHY manages regular memory, while an HB-DIMM PHY handles a smaller, high-speed memory pool, balancing capacity against bandwidth (see the sketch after this list).
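
A rough sketch of how such a two-tier layout might be modeled is below. The pool sizes and the 64-bit bus width are illustrative assumptions, not figures from the patent; only the per-pin rates come from the article above.

```python
# Hypothetical dual-PHY memory layout: a large standard-DDR5 pool for capacity
# plus a smaller HB-DIMM pool for bandwidth.

from dataclasses import dataclass

@dataclass
class MemoryPool:
    name: str
    capacity_gb: int
    per_pin_gbps: float
    bus_width_bits: int = 64  # assumed standard DIMM data-bus width, ECC excluded

    @property
    def bandwidth_gbs(self) -> float:
        return self.per_pin_gbps * self.bus_width_bits / 8

# Pool sizes chosen only to illustrate the capacity/bandwidth split.
pools = [
    MemoryPool("standard DDR5 (capacity tier)", capacity_gb=128, per_pin_gbps=6.4),
    MemoryPool("HB-DIMM (bandwidth tier)", capacity_gb=32, per_pin_gbps=12.8),
]

for p in pools:
    print(f"{p.name}: {p.capacity_gb} GB, {p.bandwidth_gbs:.1f} GB/s peak per DIMM")
```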

[Figure: AMD memory patent illustration]

Challenges: Power and Cooling

Merging two data streams into one high-speed signal requires extra logic and circuitry, which increases power consumption and heat output.

This means:

  • Systems will need stronger cooling solutions.
  • Power efficiency must be carefully balanced, especially for laptops and compact PCs.

Still, compared to the slow and costly evolution of DRAM processes, HB-DIMM offers a faster path to bandwidth scaling by focusing on DIMM-level innovation.

[Figure: AMD memory patent illustration]

AMD’s History in Memory Innovation

AMD is no stranger to memory breakthroughs. Its collaboration with SK Hynix to create HBM (High Bandwidth Memory) reshaped GPU memory with 3D stacking and ultra-wide buses.

  • HBM → Achieves bandwidth through wide interfaces and stacking.
  • HB-DIMM → Achieves bandwidth through logic multiplexing.

Both approaches show AMD’s multi-dimensional strategy in tackling memory bottlenecks.

Final Thoughts

If HB-DIMM proves commercially viable, it could reshape memory architecture across data centers, AI accelerators, and even consumer APUs. By doubling bandwidth without needing new DRAM chips, AMD offers the industry a cost-effective, scalable path to keep pace with growing computing demands.

As AI, graphics, and high-performance computing workloads continue to expand, HB-DIMM could become a key differentiator in AMD’s product lineup—and a powerful tool for the wider semiconductor industry.
