Pruning by explaining: A novel criterion for deep neural network pruning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yeom, S.-K. | - |
dc.contributor.author | Seegerer, P. | - |
dc.contributor.author | Lapuschkin, S. | - |
dc.contributor.author | Binder, A. | - |
dc.contributor.author | Wiedemann, S. | - |
dc.contributor.author | Müller, K.-R. | - |
dc.contributor.author | Samek, W. | - |
dc.date.accessioned | 2021-12-01T23:42:10Z | - |
dc.date.available | 2021-12-01T23:42:10Z | - |
dc.date.created | 2021-08-31 | - |
dc.date.issued | 2021-07 | - |
dc.identifier.issn | 0031-3203 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/128751 | - |
dc.description.abstract | The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming to not sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: The most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive or better compared to state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which the data of the task to be transferred to is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost in the order of gradient computation and is comparatively simple to apply without the need for tuning hyperparameters for pruning. © 2021 The Authors | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Elsevier Ltd | - |
dc.subject | Convolutional neural networks | - |
dc.subject | Digital storage | - |
dc.subject | Iterative methods | - |
dc.subject | Transfer learning | - |
dc.subject | Application scenario | - |
dc.subject | Computational costs | - |
dc.subject | Gradient computation | - |
dc.subject | Hyperparameters | - |
dc.subject | Interpretability | - |
dc.subject | Model compression | - |
dc.subject | Relevance score | - |
dc.subject | State of the art | - |
dc.subject | Deep neural networks | - |
dc.title | Pruning by explaining: A novel criterion for deep neural network pruning | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Müller, K.-R. | - |
dc.identifier.doi | 10.1016/j.patcog.2021.107899 | - |
dc.identifier.scopusid | 2-s2.0-85101752375 | - |
dc.identifier.wosid | 000639745600006 | - |
dc.identifier.bibliographicCitation | Pattern Recognition, v.115 | - |
dc.relation.isPartOf | Pattern Recognition | - |
dc.citation.title | Pattern Recognition | - |
dc.citation.volume | 115 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | Convolutional neural networks | - |
dc.subject.keywordPlus | Digital storage | - |
dc.subject.keywordPlus | Iterative methods | - |
dc.subject.keywordPlus | Transfer learning | - |
dc.subject.keywordPlus | Application scenario | - |
dc.subject.keywordPlus | Computational costs | - |
dc.subject.keywordPlus | Gradient computation | - |
dc.subject.keywordPlus | Hyperparameters | - |
dc.subject.keywordPlus | Interpretability | - |
dc.subject.keywordPlus | Model compression | - |
dc.subject.keywordPlus | Relevance score | - |
dc.subject.keywordPlus | State of the art | - |
dc.subject.keywordPlus | Deep neural networks | - |
dc.subject.keywordAuthor | Convolutional neural network (CNN) | - |
dc.subject.keywordAuthor | Explainable AI (XAI) | - |
dc.subject.keywordAuthor | Interpretation of models | - |
dc.subject.keywordAuthor | Layer-wise relevance propagation (LRP) | - |
dc.subject.keywordAuthor | Pruning | - |
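The criterion described in the abstract — rank units (weights or filters) by relevance scores obtained from an XAI method such as layer-wise relevance propagation (LRP) and remove the least relevant ones — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `relevance` vector here is a made-up stand-in for scores that LRP would accumulate over a batch of reference inputs, and `prune_by_relevance` is a hypothetical helper that masks filters rather than structurally removing them.

```python
import numpy as np

def prune_by_relevance(filters, relevance, ratio):
    """Zero out the fraction `ratio` of filters with the lowest relevance.

    filters:   array of shape (n_filters, ...) holding conv filter weights
    relevance: one score per filter, e.g. LRP relevance summed over a batch
    ratio:     fraction of filters to remove (0 <= ratio <= 1)
    """
    n_prune = int(len(relevance) * ratio)
    # Indices of the least relevant filters (relevance in ascending order).
    drop = np.argsort(relevance)[:n_prune]
    pruned = filters.copy()
    pruned[drop] = 0.0  # masking stands in for structural filter removal
    return pruned, drop

# Toy example: 8 filters of shape 3x3 with illustrative relevance scores.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3))
rel = np.array([0.9, 0.1, 0.5, 0.05, 0.7, 0.3, 0.8, 0.2])
w_pruned, dropped = prune_by_relevance(w, rel, ratio=0.25)
# With ratio=0.25, the two lowest-relevance filters (indices 3 and 1) are masked.
```

In iterative pruning, as evaluated in the paper, this selection step would alternate with (optional) fine-tuning, with relevance scores recomputed after each round.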
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.