SK hynix has officially completed the development of its HBM4 (High Bandwidth Memory 4) chips and is ready to begin mass production ahead of schedule. This milestone comes just six months after the company delivered samples of its 12-layer HBM4 to major customers such as Nvidia, marking a critical turning point for AI and high-performance computing (HPC) infrastructure.
Originally planned for early 2026, HBM4 production was reportedly pulled forward to the second half of 2025 at the request of Nvidia CEO Jensen Huang, who urged SK hynix to accelerate delivery to meet the growing demands of AI workloads.
Why HBM4 Matters for AI and GPUs #
SK hynix has long been a leader in HBM innovation: it was the first to mass-produce HBM3 in 2022 and the first to supply HBM3E chips to Nvidia in 2024. With HBM4, SK hynix strengthens its role as the primary memory supplier for Nvidia’s next-generation GPU, Vera Rubin, the successor to Blackwell.
Industry reports indicate that:
- Nvidia’s Rubin Ultra dual GPU will integrate up to 16 HBM4 stacks.
- The Blackwell Ultra GPU, arriving later in 2025, will continue to rely on HBM3E with up to 288 GB of memory.
This tight coupling of GPUs and HBM memory underlines how HBM technology is essential for training massive AI models like ChatGPT.
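To see why stack count matters, aggregate memory bandwidth scales linearly with the number of HBM stacks on a package. The sketch below is illustrative only: the per-stack figure is assumed from HBM4's headline 2 TB/s spec, and actual Rubin Ultra bandwidth has not been disclosed.

```python
# Rough aggregate-bandwidth arithmetic for a multi-stack GPU package.
# Assumption: ~2 TB/s per HBM4 stack (the headline HBM4 figure);
# Rubin Ultra's real numbers are not public, so treat this as a sketch.

HBM4_STACK_BANDWIDTH_TBPS = 2.0   # assumed per-stack bandwidth, TB/s
RUBIN_ULTRA_STACKS = 16           # stack count from industry reports

aggregate_tbps = HBM4_STACK_BANDWIDTH_TBPS * RUBIN_ULTRA_STACKS
print(f"Aggregate HBM bandwidth: {aggregate_tbps:.0f} TB/s")  # → 32 TB/s
```

Under these assumptions, sixteen stacks would feed the package tens of terabytes per second, which is why GPU vendors push so hard on both stack count and per-stack speed.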
HBM4 Bandwidth and Performance Breakthroughs #
High Bandwidth Memory achieves its performance advantage by vertically stacking DRAM chips and interconnecting them with wide data channels. SK hynix’s 12-layer HBM4 delivers game-changing improvements:
- Double the I/O terminals: 2,048 vs. 1,024 in HBM3E.
- Bandwidth boost: A 10 Gbps per-pin data rate, exceeding the 8 Gbps specified by the JEDEC standard.
- World-first milestone: Capable of processing more than 2 TB of data per second, equivalent to 400 full-HD movies (roughly 5 GB each) every second.
- 60% faster than HBM3E thanks to doubled data channels.
- 40% more power efficient, leading to up to 69% AI performance gains.
For AI data centers, where power efficiency and throughput are critical, HBM4 represents a leap forward.
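The headline figures above follow from simple arithmetic: per-stack bandwidth is the I/O width times the per-pin data rate, divided by 8 bits per byte. A quick sketch, using the pin counts and rates quoted in this article (the ~5 GB-per-movie figure is an assumption behind the "400 movies" comparison):

```python
# Back-of-the-envelope check on the HBM4 bandwidth figures.
# Per-stack bandwidth (GB/s) = I/O pins x per-pin rate (Gb/s) / 8.

def stack_bandwidth_gb_per_s(io_pins: int, pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s from pin count and per-pin data rate."""
    return io_pins * pin_rate_gbps / 8

jedec_base = stack_bandwidth_gb_per_s(2048, 8.0)   # 2048 GB/s, ~2 TB/s
sk_hynix = stack_bandwidth_gb_per_s(2048, 10.0)    # 2560 GB/s, ~2.56 TB/s

# The "400 full-HD movies per second" claim assumes roughly 5 GB per movie:
movies_per_second = jedec_base / 5  # ≈ 410

print(jedec_base, sk_hynix, round(movies_per_second))
```

Doubling the I/O count from HBM3E's 1,024 pins to 2,048, then raising the per-pin rate from 8 to 10 Gbps, is what pushes a single stack past the 2 TB/s mark.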
Inside HBM4 Manufacturing: MR-MUF and 1bnm DRAM #
SK hynix’s HBM4 combines cutting-edge materials, processes, and collaborations:
🔹 Mass Reflow Molded Underfill (MR-MUF) #
A proprietary packaging process that:
- Reduces production risks and chip warpage.
- Enhances heat dissipation compared to traditional film-type stacking.
- Ensures stronger mechanical stability during chip stacking.
🔹 10-nm-Class Fifth-Gen 1bnm DRAM #
- Smaller DRAM cells for higher density and lower power consumption.
- Improved performance-per-watt compared to 1anm DRAM.
🔹 Partnership With TSMC #
For the first time, SK hynix partnered with TSMC to integrate its advanced logic base die into HBM4, further strengthening its technology stack and market position against rivals like Samsung.
Market Impact: SK hynix Extends Its Lead #
By delivering HBM4 ahead of schedule, SK hynix cements its first-mover advantage. Competitors Samsung and Micron are expected to launch their HBM4 devices in 2026, giving SK hynix nearly a full year of lead time.
According to TrendForce:
- In 2024, SK hynix controlled 52.5% of the global HBM market.
- Samsung followed with 42.4%, while Micron trailed at just 5.1%.
Post-HBM4, analysts project SK hynix’s market share will rise into the low 60% range by 2026.
Final Thoughts #
SK hynix’s early HBM4 launch is more than a manufacturing milestone—it’s a strategic play that solidifies its role as the memory backbone of the AI revolution. With unmatched bandwidth, power efficiency, and close collaboration with Nvidia, SK hynix is shaping the future of AI infrastructure, GPUs, and high-performance computing.