Moving away from a CPU-centric structure toward one built around a huge shared memory pool
Gen-Z Consortium Leads Memory-Centric Computing

This year, memory led the global semiconductor market boom, as supply could not keep up with demand and memory prices rose. So what changes will the memory market face as the era of artificial intelligence (AI) approaches?
Mirae Asset Daewoo analyst Do Hyun-woo said, “I think the market will hold up through the first half of next year. DRAM prices are continuing to rise on restrained capacity investment and server demand, and NAND prices have started to rise on demand for the iPhone X. Memory supply and demand will remain favorable through the first half of 2018,” adding, “Chinese memory semiconductor companies are expected to begin producing small quantities in the fourth quarter of next year, which could be a variable.”
With demand for memory semiconductors expected to remain strong, what issues should we pay attention to? Analyst Do emphasized that we should keep an eye on memory-centric computing.
Traditional computer systems are built around a processor paired with its own memory, so finding data within a server is fast, but finding data on a neighboring server in a parallel cluster takes a long time. That structure is not enough to process AI workloads. The proposed alternative separates the memory from the processor: finding data within a single server may be somewhat slower, but retrieving data held by other servers is faster. Changing the structure this way also requires more memory than the existing approach.

HPE's Prototype Server for Memory-Centric Computing, 'The Machine'
Memory-centric computing disaggregates CPUs, FPGAs, GPUs, DRAM, and SSDs around a huge shared memory pool. All processors connect to the pool in parallel, so there is no need to shuttle data between different parts of the system. But as the pool grows, the cost of expensive DRAM becomes a problem. Storage Class Memory (SCM), a next-generation memory, has been proposed as the solution: it is non-volatile yet retains DRAM-like speed.
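The key property of the pool, that every processor operates on the same data in place rather than sending copies around, can be sketched with a single-machine analogy using Python's standard `multiprocessing.shared_memory` module. This is only an operating-system-level illustration of the concept, not Gen-Z hardware; the "node" names are illustrative.

```python
# Single-machine analogy for a shared memory pool: two handles
# ("node A" and "node B") attach to one buffer and operate on it
# in place, so no data is copied between them. Real memory-centric
# fabrics (e.g. Gen-Z) pool memory across servers in hardware.
from multiprocessing import shared_memory

# "Node A" allocates the pool.
node_a = shared_memory.SharedMemory(create=True, size=16)
node_a.buf[0] = 0

# "Node B" attaches to the same pool by name (in a real system this
# would be another process, or another server on the fabric).
node_b = shared_memory.SharedMemory(name=node_a.name)
node_b.buf[0] = 42          # write through node B's handle

print(node_a.buf[0])        # 42: node A sees the write; nothing was transferred

node_b.close()
node_a.close()
node_a.unlink()
```

The point of the sketch is that the write never travels between the two "nodes": both handles address the same physical buffer, which is exactly the property a hardware memory pool generalizes across servers.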
Intel is already mass-producing SCM: its Optane memory, a type of SCM, is built on the 3D XPoint technology it developed jointly with Micron. Intel is also setting its own standards rather than following JEDEC's.
The other companies, excluding Intel, have responded by forming the Gen-Z consortium. Its membership, which includes Samsung Electronics, SK Hynix, ARM, and Xilinx, covers most of the industry apart from the data center companies. Their goal is to develop next-generation memories such as SCM and drive the transition to memory-centric computing.
Analyst Do said, “HPE unveiled ‘The Machine,’ a prototype for implementing memory-centric computing, in March of this year. It has 40 nodes and a memory pool capacity of 160TB,” adding, “Intel’s 3D XPoint sits between NAND flash and DRAM in price and capacity. It will start to have an impact on the market next year.”
The memory drawing attention in the industry recently is HBM (High Bandwidth Memory). A 2.5D package combines a logic chip such as a CPU or GPU with memory on a single substrate, and the DRAM that goes into it is HBM. The 2.5D package is fast, but data transfer between chips has been a serious bottleneck. HBM uses TSV (Through Silicon Via), a 3D stacking technology, to stack DRAM dies vertically, so the direct connections between the memory dies promise high speed. However, yields are low and the price is high.
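The bandwidth advantage behind HBM can be seen with back-of-the-envelope arithmetic: peak bandwidth is bus width times per-pin data rate. An HBM2 stack exposes a 1024-bit interface at up to 2 Gbit/s per pin, while a conventional GDDR5 chip uses a 32-bit interface at around 8 Gbit/s per pin; these are published standard figures, and the sketch below is illustrative only.

```python
# Back-of-the-envelope peak bandwidth: bus width (bits) times
# per-pin data rate (Gbit/s), divided by 8 to convert to GB/s.
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbit: float) -> float:
    return bus_width_bits * pin_rate_gbit / 8

hbm2_stack = peak_bandwidth_gbs(1024, 2.0)  # TSV-stacked: very wide, modest per-pin rate
gddr5_chip = peak_bandwidth_gbs(32, 8.0)    # conventional: narrow, fast per pin

print(hbm2_stack)  # 256.0 GB/s per HBM2 stack
print(gddr5_chip)  # 32.0 GB/s per GDDR5 chip
```

The wide-and-slow interface is what TSV stacking buys: thousands of short vertical connections replace a narrow run of fast board-level traces, which is why HBM sidesteps the chip-to-chip bottleneck described above.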
Analyst Do said, “The TSV process is difficult and expensive. It currently accounts for 1% of the market, but we expect it to grow to 30%,” and predicted, “Demand for post-processing equipment will also increase accordingly.”