NEWS TAGGED HBM4
Tuesday 17 March 2026
HBM4 showdown at GTC 2026: SK Hynix, Samsung, and Micron battle for AI memory supremacy
The 2026 NVIDIA GPU Technology Conference (GTC) has transcended its origins as a developer forum to become the ultimate proving ground for the high-bandwidth memory (HBM) indust...
Tuesday 17 March 2026
Micron enters high-volume production of HBM4 and PCIe Gen6 SSDs for Nvidia platforms
At the Nvidia GTC 2026 conference, Micron Technology signaled a major push in the AI hardware race, announcing high-volume production of memory and storage components purpose-built...
Tuesday 17 March 2026
Nvidia GTC 2026: Samsung unveils HBM4E, showcasing comprehensive AI solutions, Nvidia partnership, and vision
Samsung Electronics outlined the full range of AI computing technologies it will present at Nvidia GTC 2026 in San Jose, California, March 16-19, highlighting memory, logic, foundry,...
Tuesday 17 March 2026
SK hynix to Showcase AI Memory Leadership at NVIDIA GTC 2026
SK hynix Inc. (or "the company", www.skhynix.com) announced today that it is participating in GTC 2026, held from March 16 to 19 in San Jose, California.
Monday 16 March 2026
DRAM prices surge 180% as HBM competition shifts to profitability
DRAM and NAND flash prices have soared by as much as 180% since the Lunar New Year, amid worsening memory chip supply shortages that are expected to persist at least until late 2027...
Friday 13 March 2026
Samsung targets 2nm process for HBM4E base die to boost AI chip edge
Samsung Electronics is reportedly planning to adopt a 2-nanometer (nm) process for the base die of its next-generation high-bandwidth memory (HBM), HBM4E, aiming to enhance its technological...
Wednesday 11 March 2026
Nvidia audits Samsung HBM4 packaging for Rubin GPUs, report says
Nvidia has reportedly conducted a series of closely spaced visits to Samsung Electronics' semiconductor packaging campus in Cheonan, South Korea, signaling that verification of the...
Monday 9 March 2026
Industry mulls thicker HBM standards for 20-layer stacks
Global semiconductor companies are discussing relaxing thickness standards for next-generation high-bandwidth memory (HBM) as the industry moves toward higher stack counts in future...
Monday 9 March 2026
Nvidia halts China-bound H200 production, shifts TSMC capacity to Vera Rubin
Nvidia has halted production of artificial intelligence (AI) chips intended for the Chinese market and redirected manufacturing capacity...
Friday 6 March 2026
SK Group chairman reportedly to meet Nvidia CEO at GTC
SK Group chairman Chey Tae-Won is expected to attend Nvidia GTC 2026 in San Jose, California, on March 16, 2026, where he will likely hold high-level talks with Nvidia CEO Jensen Huang...
Wednesday 4 March 2026
Samsung expands AI chip push with Pyeongtaek P5 and Texas foundry ramp
Samsung Electronics is accelerating the expansion of its Pyeongtaek semiconductor campus in South Korea, aiming to establish the site...
Tuesday 3 March 2026
DRAM prices to surge up to 70% in 2Q26; Nvidia GTC 2026 ignites AI memory rally
AI-driven data centre expansion has pushed the memory market beyond traditional supply-demand cycles, creating a structural shortage that continues to deepen. Industry sources say...
Monday 2 March 2026
Nvidia's Vera Rubin platform faces HBM4, cooling, and software hurdles ahead of ramp
Nvidia's next-generation Vera Rubin platform has moved from public unveiling to early customer sampling, with the company projecting a broader production ramp later this year. Both...
Thursday 26 February 2026
Samsung reportedly exits 2D NAND, converts Hwaseong Line 12 to 1c DRAM end fab
Samsung Electronics will halt 2D NAND flash production at its Hwaseong Line 12 and convert the facility into a 1c DRAM end fab, formally exiting planar NAND manufacturing and reallocating...
Thursday 26 February 2026
SK Hynix and Sandisk move to standardize HBF, shaking up AI memory ecosystem
SK Hynix and Sandisk have launched a consortium to standardize High Bandwidth Flash (HBF), positioning the technology as a next-generation memory layer for AI inference and heightening...