Detailed Information

Design of Processing-"Inside''-Memory Optimized for DRAM Behaviors

Authors
Lee, Won Jun; Kim, Chang Hyun; Paik, Yoonah; Park, Jongsun; Park, Il; Kim, Seon Wook
Issue Date
2019
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Processing-in-memory; DRAM; parallelism; matrix-vector multiplication
Citation
IEEE ACCESS, v.7, pp.82633 - 82648
Indexed
SCIE
SCOPUS
Journal Title
IEEE ACCESS
Volume
7
Start Page
82633
End Page
82648
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/68945
DOI
10.1109/ACCESS.2019.2924240
ISSN
2169-3536
Abstract
The computing domain of today's computer systems is shifting rapidly from arithmetic to data processing as data volumes grow exponentially. As a result, processing-in-memory (PIM) studies have been actively conducted to support data processing in or near memory devices, addressing the limited bandwidth and high power consumption caused by data movement between the CPU/GPU and memory. However, most PIM studies so far have designed the processing units only as accelerators on the base die of 3D-stacked DRAM, not inside the memory itself, and these designs cannot service standard DRAM requests during PIM execution. In this paper, we therefore show how to design and operate PIM computing units inside DRAM by effectively coordinating them with standard DRAM operations, while achieving full computing performance and minimizing the implementation cost. To achieve these goals, we extend the standard DRAM state diagram to describe PIM behaviors so that they are scheduled and operated on the DRAM devices in the same way as standard DRAM commands, and we exploit several levels of parallelism to overlap memory and computing operations. We also present how the entire architecture stack, from applications to operating systems, memory controllers, and PIM devices, should work together for effective execution by applying our approaches to our experimental platform. On our HBM2-based experimental platform, which includes 16-cycle MAC (multiply-and-add) units and 8-cycle reducers for matrix-vector multiplication, the all-bank and per-bank schedulings achieve 406% and 35.2% faster performance, respectively, on a (1024 x 1024) x (1024 x 1) 8-bit integer matrix-vector multiplication than the execution of only its operand burst reads at the full external DRAM bandwidth. It should be noted that the performance of a PIM on the base die of a 3D-stacked memory cannot exceed that provided by the full bandwidth in any case.
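
As a rough illustration of the computation the PIM units accelerate, the following host-side C sketch models the (1024 x 1024) x (1024 x 1) 8-bit integer matrix-vector multiplication with the input vector split column-wise across a number of banks: each bank produces per-row partial sums (the MAC stage), and a reducer adds the per-bank partials (the reducer stage). This is only a functional model under assumed parameters (e.g., 16 banks); it is not the authors' hardware design or scheduling scheme.

/* Functional sketch, not the paper's RTL: y = A * x for an 8-bit integer
 * (1024 x 1024) x (1024 x 1) matrix-vector multiply, partitioned column-wise
 * across NUM_BANKS "banks". NUM_BANKS = 16 is an illustrative assumption. */
#include <stdint.h>
#include <stdio.h>

#define N 1024
#define NUM_BANKS 16
#define COLS_PER_BANK (N / NUM_BANKS)

static int8_t  A[N][N];               /* matrix operand, 8-bit integers     */
static int8_t  x[N];                  /* vector operand                     */
static int32_t partial[NUM_BANKS][N]; /* per-bank partial sums (MAC output) */
static int32_t y[N];                  /* reduced result                     */

int main(void)
{
    /* Fill operands with small deterministic values. */
    for (int i = 0; i < N; i++) {
        x[i] = (int8_t)(i % 7);
        for (int j = 0; j < N; j++)
            A[i][j] = (int8_t)((i + j) % 5);
    }

    /* MAC stage: each bank multiplies and accumulates over its column slice. */
    for (int b = 0; b < NUM_BANKS; b++) {
        int col0 = b * COLS_PER_BANK;
        for (int i = 0; i < N; i++) {
            int32_t acc = 0;
            for (int j = col0; j < col0 + COLS_PER_BANK; j++)
                acc += (int32_t)A[i][j] * (int32_t)x[j];
            partial[b][i] = acc;
        }
    }

    /* Reducer stage: sum the per-bank partials into the final vector. */
    for (int i = 0; i < N; i++) {
        int32_t sum = 0;
        for (int b = 0; b < NUM_BANKS; b++)
            sum += partial[b][i];
        y[i] = sum;
    }

    printf("y[0] = %d, y[%d] = %d\n", (int)y[0], N - 1, (int)y[N - 1]);
    return 0;
}
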
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > School of Electrical Engineering > 1. Journal Articles

Related Researcher

Kim, Seon Wook
College of Engineering (School of Electrical Engineering)