Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design

Full metadata record
dc.contributor.author: Kim, Nahsung
dc.contributor.author: Shin, Dongyeob
dc.contributor.author: Choi, Wonseok
dc.contributor.author: Kim, Geonho
dc.contributor.author: Park, Jongsun
dc.date.accessioned: 2021-11-17T18:40:26Z
dc.date.available: 2021-11-17T18:40:26Z
dc.date.created: 2021-08-30
dc.date.issued: 2021-07
dc.identifier.issn: 2162-237X
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/127783
dc.description.abstract: For successful deployment of deep neural networks (DNNs) on resource-constrained devices, retraining-based quantization has been widely adopted to reduce the number of DRAM accesses. By properly setting training parameters such as batch size and learning rate, the bit widths of both weights and activations can be uniformly quantized down to 4 bits while maintaining full-precision accuracy. In this article, we present a retraining-based mixed-precision quantization approach and a customized DNN accelerator that together achieve high energy efficiency. In the proposed quantization, in the middle of retraining, an additional bit (an extra quantization level) is assigned to weights that frequently switch between two contiguous quantization levels, since such switching indicates that neither level reduces the quantization loss. We also mitigate the gradient noise that arises during retraining by using a lower learning rate near the quantization threshold. For the proposed mixed-precision quantized network (MPQ-network), we implemented a customized accelerator using a 65-nm CMOS process. In the accelerator, the proposed processing elements (PEs) can be dynamically reconfigured to process variable bit widths from 2 to 4 bits for both weights and activations. Numerical results show that the proposed quantization achieves a 1.37x higher compression ratio for VGG-9 on the CIFAR-10 dataset compared with a uniform 4-bit (weights and activations) model, with no loss of classification accuracy. The proposed accelerator also achieves 1.29x energy savings for VGG-9 on CIFAR-10 over a state-of-the-art accelerator.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design
dc.type: Article
dc.contributor.affiliatedAuthor: Park, Jongsun
dc.identifier.doi: 10.1109/TNNLS.2020.3008996
dc.identifier.scopusid: 2-s2.0-85111951645
dc.identifier.wosid: 000670541500011
dc.identifier.bibliographicCitation: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.32, no.7, pp.2925-2938
dc.relation.isPartOf: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
dc.citation.title: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
dc.citation.volume: 32
dc.citation.number: 7
dc.citation.startPage: 2925
dc.citation.endPage: 2938
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Computer Science, Hardware & Architecture
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.subject.keywordAuthor: Deep neural network (DNN) accelerator
dc.subject.keywordAuthor: energy-efficient accelerator
dc.subject.keywordAuthor: model compression
dc.subject.keywordAuthor: quantization
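
The abstract above describes two retraining mechanisms: weights that frequently switch between two contiguous quantization levels receive an extra bit, and the learning rate is lowered near quantization thresholds to suppress gradient noise. The following is a minimal NumPy sketch of the flip-counting idea only; it is not the authors' implementation, and the toy quantizer, the 3-bit/4-bit grids, the FLIP_THRESHOLD hyperparameter, and the random perturbation standing in for gradient updates are all illustrative assumptions.

import numpy as np

def quantize(w, n_bits, w_max=1.0):
    # Toy uniform symmetric quantizer: (2**n_bits - 1) levels on [-w_max, w_max].
    step = 2.0 * w_max / (2 ** n_bits - 1)
    return np.round(np.clip(w, -w_max, w_max) / step) * step

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.3, size=1000)

BASE_BITS = 3        # assumed base precision (the paper's PEs handle 2-4 bits)
EXTRA_BITS = 4       # "one additional bit" for frequently switching weights
FLIP_THRESHOLD = 3   # assumed hyperparameter: switches tolerated before promotion

bits = np.full(weights.shape, BASE_BITS)
flips = np.zeros(weights.shape, dtype=int)
prev_q = quantize(weights, BASE_BITS)

for _ in range(10):  # stand-in for retraining iterations
    # Stand-in for a gradient update: perturb the latent full-precision weights.
    weights += rng.normal(0.0, 0.01, size=weights.shape)
    q = np.where(bits == BASE_BITS,
                 quantize(weights, BASE_BITS),
                 quantize(weights, EXTRA_BITS))
    # A changed quantized value means the weight switched levels; frequent
    # switching suggests neither neighboring level reduces its quantization loss.
    flips += (q != prev_q)
    # Promote frequently switching weights to the finer grid (one extra bit).
    bits = np.where(flips >= FLIP_THRESHOLD, EXTRA_BITS, bits)
    prev_q = q

print(f"{(bits == EXTRA_BITS).sum()} of {weights.size} weights promoted to {EXTRA_BITS} bits")

A per-weight bit map like `bits` is the kind of mixed-precision assignment that reconfigurable PEs supporting 2-4-bit operands, as described in the abstract, would consume at inference time.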
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > School of Electrical Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Park, Jong sun
College of Engineering (School of Electrical Engineering)
