South Korean memory giant SK Hynix has announced it has begun mass production of the world's first 12-layer HBM3E, which offers a total memory capacity of 36GB, a big jump from the previous 24GB capacity of the 8-layer configuration.
This new design was made possible by reducing the thickness of each DRAM chip by 40%, allowing more layers to be stacked while maintaining the same overall size. The company plans to begin volume shipments by the end of 2024.
The HBM3E memory supports a bandwidth of 9,600 MT/s, translating to an effective speed of 1.22 TB/s if used in an eight-stack configuration. The upgrade makes it ideal for handling LLMs and AI workloads that require both speed and high capacity. The ability to process more data at faster rates allows AI models to run more efficiently.
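The headline figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming the standard 1,024-bit interface per HBM stack (an assumption; SK Hynix's announcement quotes only the 9,600 MT/s rate and the 1.22 TB/s figure, which appears to be the per-stack bandwidth truncated to two decimals):

```python
# Back-of-the-envelope check of SK Hynix's 12-layer HBM3E figures.

LAYERS = 12                  # DRAM dies per stack
GB_PER_DIE = 3               # 24Gb (3GB) dies, implied by 36GB / 12 layers
TRANSFER_RATE = 9_600e6      # 9,600 MT/s per pin
BUS_WIDTH_BITS = 1_024       # assumed per-stack interface width (HBM standard)

capacity_gb = LAYERS * GB_PER_DIE
# transfers/s * bits per transfer / 8 bits per byte, expressed in TB/s
bandwidth_tbs = TRANSFER_RATE * BUS_WIDTH_BITS / 8 / 1e12

print(f"capacity:  {capacity_gb} GB per stack")       # 36 GB per stack
print(f"bandwidth: {bandwidth_tbs:.4f} TB/s per stack")  # 1.2288 TB/s
```

The exact product, 1.2288 TB/s, rounds to the 1.22 TB/s quoted in the announcement.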
Nvidia and AMD hardware
For advanced memory stacking, SK Hynix employs innovative packaging technologies, including Through-Silicon Via (TSV) and the Mass Reflow Molded Underfill (MR-MUF) process. These techniques are essential for maintaining the structural integrity and heat dissipation required for stable, high-performance operation in the new HBM3E. The improvements in heat dissipation performance are particularly valuable for maintaining reliability during intensive AI processing tasks.
In addition to its increased speed and capacity, the HBM3E is designed to offer enhanced stability, with SK Hynix's proprietary packaging processes ensuring minimal warpage during stacking. The company's MR-MUF technology allows for better management of internal stress, reducing the chances of mechanical failure and ensuring long-term durability.
Early sampling of this 12-layer HBM3E product began in March 2024, with Nvidia's Blackwell Ultra GPUs and AMD's Instinct MI325X accelerators expected to be among the first to use the enhanced memory, taking advantage of up to 288GB of HBM3E to support complex AI computations. SK Hynix recently rejected a $374 million advance payment from an unnamed company in order to ensure it could supply Nvidia with enough HBM for its in-demand AI hardware.
“SK Hynix has once again broken through technological limits, demonstrating our industry leadership in AI memory,” said Justin Kim, President (Head of AI Infra) at SK Hynix. “We will continue our position as the No.1 global AI memory provider as we steadily prepare next-generation memory products to overcome the challenges of the AI era.”