Samsung Prepares HBM4 Mass Production for Nvidia Rubin

Samsung plans to begin mass production of HBM4 memory in February 2026, targeting Nvidia's Vera Rubin AI accelerator and Google TPUs. Built on a 10nm-class base die, the chips reportedly reached pin speeds of up to 11.7 Gbps in internal testing, even as HBM supply remains tight.

Samsung is gearing up to start mass production of its sixth-generation high-bandwidth memory (HBM4) chips in February 2026, a move that could reshape the memory supply chain for next-generation AI accelerators. The chips are expected to play a key role in Nvidia's upcoming Vera Rubin platform and are also slated for Google's cloud TPUs.

What’s changing and why it matters

After losing ground with HBM3E supplies, Samsung appears determined not to repeat that mistake. Reports say its HBM4 modules passed Nvidia's quality checks and will be produced at the Pyeongtaek campus. SK Hynix is reportedly aligned on a similar schedule, but the two vendors are taking different technical approaches that could affect performance and buyer preference.

Key details at a glance

  • Mass production target: February 2026 at Samsung’s Pyeongtaek facility.
  • Primary customer: Nvidia’s Vera Rubin AI accelerator, launching in the second half of 2026.
  • Additional customers: Select supply planned for Google’s seventh-generation TPUs.
  • Technical edge: Samsung uses a 10nm-class base die vs. SK Hynix's 12nm; Samsung's internal testing reached pin speeds of up to 11.7 Gbps (a rough bandwidth estimate follows this list).
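
To put the 11.7 Gbps pin speed in perspective, here is a minimal back-of-envelope sketch. It assumes HBM4's 2048-bit per-stack interface and a 1024-bit interface with a roughly 9.8 Gbps top bin for HBM3E; those interface widths and the HBM3E figure come from the published JEDEC generations, not from this report.

```python
# Back-of-envelope peak bandwidth per HBM stack from reported pin speeds.
# ASSUMPTIONS (not from the article): HBM4 uses a 2048-bit interface per
# stack and HBM3E a 1024-bit interface; 9.8 Gbps is a typical HBM3E top bin.

def stack_bandwidth_tbps(pin_speed_gbps: float, interface_width_bits: int) -> float:
    """Theoretical peak bandwidth of one HBM stack, in terabytes per second."""
    # pins * Gbit/s per pin -> total Gbit/s; /8 -> GB/s; /1000 -> TB/s
    return pin_speed_gbps * interface_width_bits / 8 / 1000

hbm4 = stack_bandwidth_tbps(11.7, 2048)   # Samsung's reported internal test speed
hbm3e = stack_bandwidth_tbps(9.8, 1024)   # assumed HBM3E comparison point

print(f"HBM4  @ 11.7 Gbps/pin: ~{hbm4:.2f} TB/s per stack")   # ~3.00 TB/s
print(f"HBM3E @  9.8 Gbps/pin: ~{hbm3e:.2f} TB/s per stack")  # ~1.25 TB/s
```

Under those assumptions, a single HBM4 stack tops out near 3 TB/s, more than double a typical HBM3E stack.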

Imagine AI data centers hungry for memory — now picture most of the available HBM4 capacity already pre-sold. Industry sources say production slots for HBM from both Samsung and SK Hynix are largely booked for the coming year. That scarcity raises the stakes: cloud providers and AI labs are racing to secure chips to avoid bottlenecks in their hardware rollouts.

Performance and market implications

The fabrication choices matter. Samsung’s 10nm-class process for the HBM4 base die reportedly yields higher bandwidth and efficiency compared with SK Hynix’s 12nm approach. In practical terms, that can translate to faster model training and reduced latency for large-scale AI workloads — a competitive advantage for customers choosing Samsung-powered accelerators.
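
Why raw bandwidth translates into faster AI serving: large-model token generation is typically memory-bandwidth-bound, so peak decode throughput scales roughly with bandwidth. The sketch below is a simplistic roofline-style estimate with hypothetical numbers (a 70 GB model served from a single stack), not a measured or vendor-stated figure.

```python
# Illustrative roofline-style estimate for memory-bound LLM decoding: each
# generated token streams the full weight set once, so batch-1 throughput is
# bounded by memory bandwidth / model size. All numbers are HYPOTHETICAL.

def decode_tokens_per_sec(bandwidth_tbps: float, model_gb: float) -> float:
    """Upper-bound tokens/sec when decoding is memory-bandwidth-bound."""
    return bandwidth_tbps * 1000 / model_gb  # TB/s -> GB/s, divided by GB

# Example: a 70 GB model (e.g., 70B parameters at 8-bit) from a single stack.
for label, bw_tbps in [("HBM3E ~1.25 TB/s", 1.25), ("HBM4  ~3.00 TB/s", 3.0)]:
    print(f"{label}: ~{decode_tokens_per_sec(bw_tbps, 70):.0f} tokens/s ceiling")
```

The point of the toy model is only that the generational bandwidth jump raises the throughput ceiling roughly in proportion, independent of compute improvements elsewhere in the accelerator.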

For Samsung, the timing is favorable. With demand outstripping supply, selling HBM4 chips to Nvidia, Google, and other AI firms could generate substantial revenue over the next few years. On the buyer side, a tight market means planning and procurement strategies will be decisive: securing HBM4 capacity now could determine which firms ship next-generation AI systems on schedule.

Watch for more details as Nvidia unveils Vera Rubin later in 2026 and as vendors release formal specs for HBM4 modules. The next wave of memory technology may be less about raw chips and more about who locks in capacity first.

Source: SamMobile
