
Low-power, miniaturized HBM2 that controls large-capacity memory

Published 2018.08.28 10:14

Limits to increasing memory bandwidth
Overcome with HMC, HBM2, etc.
Meets low power, miniaturization trends



Explosion of system requirements

Over the past several years, memory bandwidth and system requirements grew roughly in step. But with recent developments in AI, IoT, blockchain, RADAR, 5G, 8K video, and more, the memory throughput demanded across industries has surged, while memory bandwidth growth has not kept pace.

Memory bandwidth growth is slowing

The bandwidth of the most widely used DDR memory has doubled with each generation over the past 20 years. However, commercialization of DDR5 is still far away, and it is difficult to keep up with recent trends with just a doubling.
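The per-generation doubling can be illustrated with a rough peak-bandwidth calculation. This is only a sketch: the per-generation data rates below are representative common module speeds chosen for illustration, not figures from the article.

```python
# Peak bandwidth of one DDR channel = transfer rate (MT/s) * bus width (bits) / 8.
# Data rates are illustrative top common speeds for each generation.
DDR_RATES_MT_S = {"DDR": 400, "DDR2": 800, "DDR3": 1600, "DDR4": 3200}
BUS_WIDTH_BITS = 64  # standard DDR channel width


def peak_bandwidth_gb_s(rate_mt_s: int, width_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak bandwidth in GB/s for a single memory channel."""
    return rate_mt_s * 1e6 * width_bits / 8 / 1e9


for gen, rate in DDR_RATES_MT_S.items():
    print(f"{gen}: {peak_bandwidth_gb_s(rate):.1f} GB/s")
# Each generation roughly doubles: 3.2, 6.4, 12.8, 25.6 GB/s
```

Even at DDR4 speeds, a single 64-bit channel tops out in the tens of GB/s, an order of magnitude below what the stacked-memory approaches discussed below provide.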

DDR5 memory is expected to be released in 2019

Memory bandwidth is ultimately limited by input/output. Simply using more memory components is not a viable alternative: as the component count grows, so does the number of parallel signals that must be routed, and the buffers needed to drive those signals at high speed would exceed the power budget the system provides and take up board space.

Another option is to integrate memory directly into the computing device. FPGAs, for example, contain embedded memory blocks, but these are specialized and far too small in capacity. Standard memory technology can also be integrated directly into an FPGA, but capacity remains limited there as well. Therefore, to meet ever-increasing system requirements, a new technological innovation is needed that breaks with the conventional memory hardware architecture.


Increasing memory bandwidth in the low-power miniaturization trend
The industry is focusing on 3D stacked memory rather than DDR3/4. This is because 3D stacked memory can be implemented in a small form factor with a large capacity. There are two ways to use 3D stacked memory.

Micron's HMC Architecture

Micron's HMC (Hybrid Memory Cube), developed in 2011, delivers up to 320 GB/s using 3D TSV (Through-Silicon Via) DRAM stacking. The stacked memory is connected to a base die that includes the controller within the component. Because HMC sits on the board as a separate device, it also requires its own power delivery and termination circuits and consumes relatively more power.

AMD and SK Hynix's 1st Generation HBM

AMD and SK Hynix's HBM (High Bandwidth Memory), developed in 2013, connects its memory layers with TSV technology, the same method used in HMC. It adopts a wide parallel interface, achieving high bandwidth over a parallel bus, and it is integrated inside the computing package to avoid long traces on the board.


HBM2 in the Spotlight
HBM2, announced by Samsung Electronics and SK Hynix in August 2016, is the second-generation HBM specification, adopted as a standard by JEDEC (Joint Electron Device Engineering Council) in October of that year. A single HBM2 stack delivers up to 256 GB/s and runs at clock speeds up to 1 GHz when stacked up to four layers high. It can be used as external memory and exposes a 1024-bit parallel interface. Unlike HMC, the base die does not include a controller; the controller resides in the host device. Using HBM2 therefore requires technology for connecting large amounts of parallel data within the chip.
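The 256 GB/s figure follows directly from the interface width and per-pin data rate. A sketch of the arithmetic, assuming double-data-rate signaling (2 transfers per clock at the 1 GHz clock cited above; that DDR assumption is ours, not stated in the article):

```python
# HBM2 per-stack peak bandwidth from the numbers in the text.
INTERFACE_WIDTH_BITS = 1024  # 1024-bit parallel interface (8 channels x 128 bits)
CLOCK_GHZ = 1.0              # maximum clock cited in the article
TRANSFERS_PER_CLOCK = 2      # assumed DDR signaling: 2 Gb/s per pin at 1 GHz

pin_rate_gbps = CLOCK_GHZ * TRANSFERS_PER_CLOCK          # 2.0 Gb/s per pin
stack_bandwidth_gb_s = INTERFACE_WIDTH_BITS * pin_rate_gbps / 8
print(f"{stack_bandwidth_gb_s:.0f} GB/s per stack")      # 256 GB/s
```

The width of that 1024-bit bus is exactly why HBM2 must live inside the package: routing over a thousand high-speed traces across a PCB to an external DIMM would be impractical.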

Samsung Electronics' HBM2 DRAM configuration example

Traditional DRAMs are connected with PCB traces. HBM2, by contrast, is connected inside the package (typically over a silicon interposer). HBM2 has a smaller footprint and consumes less power than conventional DRAM, and it requires neither PCB traces nor external termination resistors.

The e4ds September 13 webinar, 'High-bandwidth memory solutions using Intel FPGAs', will examine the limitations of memory bandwidth and the characteristics of HBM. It will also cover how to optimize the HBM controller for the application, focusing on Intel's FPGAs that integrate HBM.

It seems like AlphaGo's impact was only yesterday, yet AI and IoT have already penetrated deep into our daily lives, and the automobile industry predicts that self-driving cars will be commercialized by 2025, all driven by advances in data analysis technology. The amount of data that devices and data centers must process is growing exponentially. That is why we are curious about the future that HBM2, with memory bandwidth surpassing existing DDR RAM, will bring.
Reporter Lee Su-min

High-bandwidth memory solutions using Intel FPGAs
2018-09-13 10:30~12:19
Intel / Hong Jae-seok, Director