When Nvidia CEO Jensen Huang confirmed at CES 2026 that the company's next-generation AI processor, Vera Rubin, had entered full production, the message to the memory industry was immediate. The announcement effectively ignited a new competitive cycle in sixth-generation high-bandwidth memory (HBM4), with suppliers racing to lock in design wins for Nvidia's post-Blackwell platforms.
Micron targets 30% HBM4 capacity share
Among the suppliers, Micron Technology is emerging as the most aggressive mover. South Korean industry sources cited by ETNews say the US memory maker plans to lift HBM4 capacity to 15,000 wafers per month in 2026. Korean securities firms estimate Micron's total HBM output at about 55,000 wafers per month, which would put HBM4 at roughly 30% of overall capacity, signaling a decisive shift toward next-generation products.
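The capacity-share figure can be sanity-checked with simple arithmetic. The wafer counts below come from the article's cited ETNews and Korean securities estimates, not from official Micron disclosures:

```python
# Back-of-the-envelope check of Micron's reported HBM4 capacity share.
# Both figures are third-party estimates cited in the article.
hbm4_wafers_per_month = 15_000       # planned HBM4 capacity in 2026
total_hbm_wafers_per_month = 55_000  # estimated total HBM output

share = hbm4_wafers_per_month / total_hbm_wafers_per_month
print(f"HBM4 share of total HBM capacity: {share:.1%}")
```

The exact ratio is about 27%, which the industry sources round up to "roughly 30%" of overall capacity.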
Industry insiders say Micron has already begun equipment investments, positioning itself to respond quickly as early HBM4 demand materializes. The timing aligns with Nvidia's confirmation that Vera Rubin has moved beyond sampling and validation into full-scale production.
Samsung and SK Hynix join the race
Micron will not be alone. Samsung and SK Hynix are also preparing HBM4 for Vera Rubin, with all three suppliers' products currently under evaluation by Nvidia as supply schedules and deployment timelines are coordinated. Market expectations point to February 2026 as the start of large-scale HBM4 co-supply for Nvidia platforms.
Historically, Micron has trailed its South Korean rivals in HBM capacity. The 2026 expansion is widely seen as an attempt at a strategic reversal, pairing scale with its long-standing strength in low-power memory design.
Ramp-up begins in second quarter 2026
That direction had already surfaced in Micron's guidance. During its December 17, 2025, earnings call, CEO Sanjay Mehrotra said the company would begin ramping HBM4 output in the second quarter of 2026, adding that yields are improving faster than they did for HBM3E. With several new fabs already incorporated into its roadmap, Micron expects momentum to accelerate toward the end of 2026.
Micron's confidence has been building since late last year. As disclosed during its fiscal first-quarter earnings briefing and reported by ZDNet Korea, the company has expanded its HBM customer base to three major clients while preparing for full-scale HBM4 production in 2026 after skipping HBM3. Executives have also cited strong customer feedback on its 12-high HBM3E, supplied to Nvidia's Blackwell accelerators, highlighting lower power consumption as a key differentiator as AI systems scale.
Early HBM4 supply already contracted
Crucially, Micron has indicated that near-term HBM supply is already fully contracted. According to disclosures cited by The Elec, the company has finalized price and volume agreements for upcoming HBM shipments, including early HBM4. Micron also says its HBM4 runs at more than 11 gigabits per second (Gbps) per pin, outperforming the baseline JEDEC specification and Nvidia's operating targets, and that yields are stabilizing faster than they did for the previous generation.
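As a rough illustration of what that pin speed implies, assuming the JEDEC HBM4 baseline of a 2048-bit interface per stack (the interface width is an assumption here; the article quotes only the >11 Gbps figure), per-stack bandwidth works out to:

```python
# Rough per-stack bandwidth implied by an 11 Gbps pin speed.
# The 2048-bit interface width is the JEDEC HBM4 baseline (double
# HBM3E's 1024 bits) and is assumed, not stated in the article.
pin_speed_gbps = 11          # gigabits per second, per pin
interface_width_bits = 2048  # pins per stack (assumed JEDEC baseline)

bandwidth_gb_per_s = pin_speed_gbps * interface_width_bits / 8  # gigabytes/s
bandwidth_tb_per_s = bandwidth_gb_per_s / 1000                  # terabytes/s
print(f"~{bandwidth_tb_per_s:.1f} TB/s per stack")
```

Under those assumptions, an 11 Gbps pin speed yields roughly 2.8 TB/s per stack, versus about 2 TB/s at the 8 Gbps JEDEC baseline rate.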
Taken together, Nvidia's move to full production on Vera Rubin and Micron's capacity-first HBM4 push mark a clear inflection point for the AI memory market. Whether Micron's mix of low-power design, early customer lock-in, and aggressive scale-up can truly challenge Samsung and SK Hynix will become clearer as HBM4 volumes ramp through 2026.
What is already clear is that in the HBM4 era, capacity is no longer a background variable. It is a frontline competitive weapon.
Article edited by Jerry Chen

