
Low-overhead inverted LUT design for bounded DNN activation functions on floating-point vector ALUs

Full metadata record
dc.contributor.author: Kim, Seok Young
dc.contributor.author: Kim, Chang Hyun
dc.contributor.author: Lee, Won Joon
dc.contributor.author: Park, Il
dc.contributor.author: Kim, Seon Wook
dc.date.accessioned: 2022-08-10T09:40:44Z
dc.date.available: 2022-08-10T09:40:44Z
dc.date.created: 2022-08-10
dc.date.issued: 2022-09
dc.identifier.issn: 0141-9331
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/142722
dc.description.abstract: An inference engine uses floating-point numbers to provide high accuracy in deep neural network computing despite its limited computing resources. However, the computation of non-linear activation functions becomes a performance bottleneck, which can be alleviated by adopting a lookup table (LUT) method. A characteristic of the floating-point number system, in which the intervals between mantissa values differ depending on their exponent values, makes it challenging to calculate LUT index values and produce error-tolerant outputs. This paper proposes a floating-point-based lookup table (FP-LUT) that produces minimal errors and requires negligible hardware cost, especially for vector arithmetic logic units (ALUs), using bfloat16, which was recently proposed for both inference and training. Instead of calculating the index from the function input value, we apply the principle of an inverse function in our design, specifically targeting bounded DNN activation functions. We divide the range of function output values linearly by the number of LUT entries and store the corresponding input values in the LUT. Then, we compare the incoming input value with the stored LUT values, find the corresponding address, and convert it into an FP format for the output. (A minimal software sketch of this inverted-lookup scheme appears after this metadata record.) We applied our 32-entry FP-LUT to an in-house 8-way bfloat16 MAC unit to support four DNN activation functions: logistic sigmoid, hyperbolic tangent, soft sign, and ISRU, incurring only 1.22% area and 0.46% power consumption overhead. Our accuracy analysis shows that with an entry count only 1/8 that of state-of-the-art 16-bit fixed-point LUT methods and this small logic overhead, FP-LUT reduces the average errors by 51.8%, 28.4%, 14.4%, and 26.1% for those functions on our test datasets, respectively. Additionally, we show that our scheme satisfies all application-defined accuracy requirements.
dc.language: English
dc.language.iso: en
dc.publisher: ELSEVIER
dc.title: Low-overhead inverted LUT design for bounded DNN activation functions on floating-point vector ALUs
dc.type: Article
dc.contributor.affiliatedAuthor: Kim, Seon Wook
dc.identifier.doi: 10.1016/j.micpro.2022.104592
dc.identifier.scopusid: 2-s2.0-85133870087
dc.identifier.wosid: 000826734500001
dc.identifier.bibliographicCitation: MICROPROCESSORS AND MICROSYSTEMS, v.93
dc.relation.isPartOf: MICROPROCESSORS AND MICROSYSTEMS
dc.citation.title: MICROPROCESSORS AND MICROSYSTEMS
dc.citation.volume: 93
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Computer Science, Hardware & Architecture
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.subject.keywordAuthor: Lookup table
dc.subject.keywordAuthor: Bfloat16
dc.subject.keywordAuthor: Activation functions
dc.subject.keywordAuthor: Deep neural networks
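
The lookup scheme described in the abstract can be modeled in a few lines of software. The sketch below is illustrative only: it uses the logistic sigmoid (output bounded in (0, 1)) and ordinary Python floats, whereas the paper's design is a hardware LUT operating on bfloat16 values inside an 8-way vector MAC unit; the 32-entry size is the only parameter taken from the paper, and all function and variable names are our own. The table is built by sampling the inverse function (logit) at linearly spaced output levels, and lookup is done by comparing the input against the stored values, mirroring the build/compare/convert steps the abstract describes.

```python
import math

# Illustrative software model of an inverted (inverse-function) LUT for a
# bounded activation function. Hypothetical names; the paper's bfloat16
# vector-ALU hardware is not modeled here.

ENTRIES = 32  # entry count taken from the paper's 32-entry FP-LUT

def logit(y):
    """Inverse of the logistic sigmoid."""
    return math.log(y / (1.0 - y))

# Build step: divide the function's OUTPUT range (0, 1) linearly into
# ENTRIES intervals and store the INPUT value at each interval boundary.
thresholds = [logit(i / ENTRIES) for i in range(1, ENTRIES)]

def sigmoid_lut(x):
    # Lookup step: compare the incoming input against the stored values to
    # find the matching address (a comparator bank in hardware), then
    # convert the address back into a function output (interval midpoint).
    addr = sum(1 for t in thresholds if t <= x)  # 0 .. ENTRIES - 1
    return (addr + 0.5) / ENTRIES

if __name__ == "__main__":
    for x in (-4.0, -1.0, 0.0, 1.0, 4.0):
        print(f"x={x:+4.1f}  lut={sigmoid_lut(x):.4f}  "
              f"exact={1.0 / (1.0 + math.exp(-x)):.4f}")
```

Because the output levels, rather than the inputs, are linearly spaced, the address computation sidesteps the exponent-dependent mantissa spacing of floating-point inputs that the abstract identifies as the obstacle for conventional input-indexed LUTs.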
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > School of Electrical Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Seon Wook
College of Engineering (School of Electrical Engineering)