Samsung is already gearing up to push memory performance further: after debuting its first HBM4 at SEDEX 2025, the company plans to unveil a significantly faster HBM4 variant at ISSCC 2026. Here’s what we know about the speed bump, the engineering behind it, and why it matters for AI and high-performance computing.
A big jump in bandwidth: 2.4TB/s to 3.3TB/s
At SEDEX last month Samsung introduced its first sixth-generation high-bandwidth memory (HBM4) chip with a bandwidth of 2.4TB/s. According to industry reports, the new HBM4 Samsung will show at the International Solid-State Circuits Conference (ISSCC) — scheduled for February 15–19, 2026 in San Francisco — pushes peak bandwidth up to about 3.3TB/s. That's a 37.5% increase over the earlier chip.
What changed under the hood?
Sources say the performance gain stems from a redesigned stacked structure and an updated interface that together raise throughput while improving power efficiency. Samsung reportedly used 1c DRAM process variants for its HBM4 stacks, while rivals like Micron and SK Hynix chose 1b DRAM for their competing modules. In plain terms: Samsung’s architecture tweaks and DRAM choice aim to squeeze more bandwidth per watt out of the same package footprint.
Key specs at a glance
- Earlier HBM4 bandwidth: 2.4TB/s
- Upcoming HBM4 bandwidth: up to 3.3TB/s
- Improvement: ~37.5%
- Design changes: new stacked structure and interface
- DRAM variant: Samsung using 1c DRAM vs rivals' 1b
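The headline percentage follows directly from the two bandwidth figures in the list above. As a quick sanity check (the per-pin line assumes HBM4's 2048-bit stack interface from the JEDEC spec, which Samsung's reports don't explicitly confirm here):

```python
# Sanity check on the reported HBM4 bandwidth uplift.
earlier_bw_tbs = 2.4    # first HBM4, shown at SEDEX 2025 (TB/s per stack)
upcoming_bw_tbs = 3.3   # variant slated for ISSCC 2026 (TB/s per stack)

# Relative improvement: (3.3 - 2.4) / 2.4 = 0.375
improvement = (upcoming_bw_tbs - earlier_bw_tbs) / earlier_bw_tbs
print(f"Uplift: {improvement:.1%}")  # → Uplift: 37.5%

# Assumption: HBM4 uses a 2048-bit interface per stack (JEDEC),
# so 3.3 TB/s works out to roughly 12.9 Gb/s per pin.
interface_bits = 2048
per_pin_gbps = upcoming_bw_tbs * 1e12 * 8 / interface_bits / 1e9
print(f"Per-pin speed: {per_pin_gbps:.1f} Gb/s")  # → Per-pin speed: 12.9 Gb/s
```

The per-pin number is an illustration only; Samsung has not published the exact pin speed for this part.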

Why it matters for AI and servers
High-bandwidth memory is a critical bottleneck in AI accelerators and data-center GPUs. More bandwidth reduces memory stalls, speeds up model training and inference, and can lower overall power consumption per operation. After earlier heat-related setbacks in HBM products, Samsung appears to have pushed design fixes and yield improvements to regain competitiveness — and the company expects HBM4 to be a key revenue driver as memory demand resurges.
At a recent ISSCC Korea briefing, SK Hynix Fellow Kim Dong-gyun noted that DRAM evolution is being driven by the dual needs of bandwidth and power efficiency. Samsung’s move to show a higher-performing HBM4 at ISSCC 2026 underscores that trend and sets the stage for fresh competition in next-gen memory for AI servers.
Expect more technical detail and real-world performance metrics when Samsung officially presents the chip at ISSCC next February — a must-watch moment for hardware designers and cloud providers chasing higher throughput for AI workloads.
Source: sammobile