Detailed Information


Low-overhead inverted LUT design for bounded DNN activation functions on floating-point vector ALUs

Authors
Kim, Seok Young; Kim, Chang Hyun; Lee, Won Joon; Park, Il; Kim, Seon Wook
Issue Date
Sep-2022
Publisher
Elsevier
Keywords
Lookup table; Bfloat16; Activation functions; Deep neural networks
Citation
MICROPROCESSORS AND MICROSYSTEMS, v.93
Indexed
SCIE
SCOPUS
Journal Title
MICROPROCESSORS AND MICROSYSTEMS
Volume
93
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/142722
DOI
10.1016/j.micpro.2022.104592
ISSN
0141-9331
Abstract
An inference engine uses floating-point numbers to provide high accuracy in deep neural network computing despite its limited computing resources. However, the computation of non-linear activation functions becomes a performance bottleneck, which can be alleviated by adopting a lookup table (LUT) method. Unfortunately, a characteristic of the floating-point number system, in which the intervals between representable numbers differ depending on their exponent values, makes it challenging to calculate LUT index values and produce error-tolerant outputs. This paper proposes a floating-point-based lookup table (FP-LUT) that produces minimal errors and requires negligible hardware cost, especially for vector arithmetic logic units (ALUs), using the bfloat16 format recently proposed for both inference and training. Instead of calculating the index from the function input value, we apply the principle of the inverse function in our design, targeting bounded DNN activation functions. We divide the range of function output values linearly by the number of LUT entries and store the corresponding input values in the LUT. Then, we compare the incoming input value with the stored LUT values, find the corresponding address, and convert it into an FP format for the output. We applied our 32-entry FP-LUT to an in-house 8-way bfloat16 MAC unit to support four DNN activation functions: logistic sigmoid, hyperbolic tangent, soft sign, and ISRU, which incurs only 1.22% area and 0.46% power consumption overhead. Our accuracy analysis shows that, with only one-eighth the entry count of state-of-the-art 16-bit fixed-point LUT methods and the small logic overhead, FP-LUT reduces the average errors in those functions by 51.8%, 28.4%, 14.4%, and 26.1%, respectively, on our test datasets. Additionally, we show that our scheme satisfies all application-defined accuracy requirements.
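The following is a minimal, software-level sketch of the inverted-LUT idea summarized in the abstract, assuming a monotonically increasing bounded function such as the logistic sigmoid. The function names build_inverted_lut and fp_lut_eval, the NumPy modeling, the midpoint output values, and the omission of the final bfloat16 rounding are illustrative assumptions, not the paper's hardware design.

import numpy as np

# Build an inverted LUT: divide the *output* range [y_min, y_max] linearly into
# `entries` segments and store the *input*-domain thresholds x_i = inv_f(y_i)
# at the interior segment boundaries, plus one equally spaced output per segment.
def build_inverted_lut(inv_f, y_min, y_max, entries=32):
    step = (y_max - y_min) / entries
    boundaries = y_min + step * np.arange(1, entries)     # entries-1 interior boundaries
    thresholds = inv_f(boundaries)                        # values stored in the LUT
    outputs = y_min + step * (np.arange(entries) + 0.5)   # equally spaced segment midpoints
    return thresholds, outputs

# Evaluate the activation: compare the input against every stored threshold
# (a vector ALU can do these comparisons in parallel); the count of thresholds
# below the input is the LUT address, whose output value is returned.
def fp_lut_eval(x, thresholds, outputs):
    idx = np.searchsorted(thresholds, x)
    return outputs[idx]

# Example: a 32-entry inverted LUT for the logistic sigmoid (inverse = logit).
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
logit = lambda y: np.log(y / (1.0 - y))
thresholds, outputs = build_inverted_lut(logit, 0.0, 1.0, entries=32)

x = np.linspace(-6.0, 6.0, 7)
print(fp_lut_eval(x, thresholds, outputs))   # piecewise-constant approximation
print(sigmoid(x))                            # reference values

Because the output values are equally spaced, the worst-case output error of this sketch is half an output step, independent of how the floating-point spacing of the input varies with its exponent; this is the property the inverted design exploits.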
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Engineering > School of Electrical Engineering > 1. Journal Articles



Related Researcher

Kim, Seon Wook
College of Engineering (School of Electrical Engineering)
