Detailed Information


Stochastic SOT Device Based SNN Architecture for On-Chip Unsupervised STDP Learning

Authors
Jang, Yunho; Kang, Gyuseong; Kim, Taehwan; Seo, Yeongkyo; Lee, Kyung-Jin; Park, Byong-Guk; Park, Jongsun
Issue Date
1-Sep-2022
Publisher
IEEE COMPUTER SOC
Keywords
Synapses; Neurons; Computer architecture; Switches; Hardware; Magnetization; Magnetic tunneling; Spin-orbit torque device; spiking neural network; stochastic spike-timing-dependent plasticity; on-chip learning
Citation
IEEE TRANSACTIONS ON COMPUTERS, v.71, no.9, pp.2022 - 2035
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON COMPUTERS
Volume
71
Number
9
Start Page
2022
End Page
2035
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/143737
DOI
10.1109/TC.2021.3119180
ISSN
0018-9340
Abstract
Emerging device based spiking neural network (SNN) hardware design has been actively studied. In particular, energy- and area-efficient synapse crossbars have been of great interest, but the processing units for weight summation in the synapse crossbar remain a main bottleneck for energy- and area-efficient hardware design. In this paper, we propose an efficient SNN architecture with stochastic spin-orbit torque (SOT) device based multi-bit synapses. First, we present an SOT device based synapse array using a modified Gray code. The modified Gray code based synapse needs only N devices to represent 2^N levels of synapse weight. An accumulative spike technique is also adopted in the proposed synapse array to improve ADC utilization and reduce the number of neuron updates. In addition, we propose hardware-friendly algorithmic techniques to improve classification accuracy as well as energy efficiency. Non-spike depression based stochastic spike-timing-dependent plasticity is used to reduce overlapping input representations and classification error. Early read termination is also employed to reduce energy consumption by turning off less associated neurons. The proposed SNN processor has been implemented in a 65 nm CMOS process, and it shows 90% classification accuracy on the MNIST dataset while consuming 0.78 μJ/image (training) and 0.23 μJ/image (inference) with an area of 1.12 mm².
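
As a rough illustration of the abstract's claim that a Gray-coded multi-bit synapse needs only N binary devices to represent 2^N weight levels, the sketch below uses the standard reflected-binary Gray code; the paper's specific "modified" variant is not reproduced here, so this encoding, the device count N, and the helper names are assumptions for illustration only. The key property shown is that adjacent weight levels differ in exactly one device state, so a single stochastic SOT switching event can realize a unit weight update.

# Illustrative sketch (not the paper's exact scheme): reflected-binary Gray
# code mapping of a synapse weight level onto N binary SOT device states.

N = 4  # assumed number of binary devices per synapse -> 2**N = 16 weight levels

def to_gray(level: int) -> int:
    """Standard reflected-binary Gray code of an integer weight level."""
    return level ^ (level >> 1)

def device_states(level: int, n: int = N) -> list[int]:
    """Per-device binary states (0/1) encoding the given weight level."""
    g = to_gray(level)
    return [(g >> i) & 1 for i in range(n)]

# Adjacent weight levels differ in exactly one device state, so a +/-1
# weight update toggles a single device (one stochastic switching event).
for lvl in range(2**N - 1):
    a, b = device_states(lvl), device_states(lvl + 1)
    assert sum(x != y for x, y in zip(a, b)) == 1
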
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Engineering > School of Electrical Engineering > 1. Journal Articles



Related Researcher

Park, Jong sun
College of Engineering (School of Electrical and Electronic Engineering)
