Robust CNN Compression Framework for Security-Sensitive Embedded Systems
- Authors
- Lee, Jeonghyun; Lee, Sangkyun
- Issue Date
- February 2021
- Publisher
- MDPI
- Keywords
- model compression; adversarial robustness; weight pruning; adversarial training; distillation; embedded system; secure AI
- Citation
- APPLIED SCIENCES-BASEL, v.11, no.3, pp.1 - 17
- Indexed
- SCIE
SCOPUS
- Journal Title
- APPLIED SCIENCES-BASEL
- Volume
- 11
- Number
- 3
- Start Page
- 1
- End Page
- 17
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/137801
- DOI
- 10.3390/app11031093
- ISSN
- 2076-3417
- Abstract
- Convolutional neural networks (CNNs) have achieved tremendous success in solving complex classification problems. Motivated by this success, various compression methods have been proposed to downsize CNNs so that they can be deployed on resource-constrained embedded systems. However, a new type of vulnerability of compressed CNNs, known as adversarial examples, has recently been discovered; it is critical for security-sensitive systems because adversarial examples can cause CNNs to malfunction and can often be crafted easily. In this paper, we propose a compression framework that produces compressed CNNs robust against such adversarial examples. To achieve this goal, our framework combines pruning and knowledge distillation with adversarial training. We formulate the framework as an optimization problem and provide a solution algorithm based on the proximal gradient method, which is more memory-efficient than the popular ADMM-based compression approaches. Experiments show that our framework improves the trade-off between adversarial robustness and compression rate compared to the existing state-of-the-art adversarial pruning approach.
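The abstract mentions a proximal-gradient solution algorithm whose sparsity-inducing step prunes weights. The following is a minimal illustrative sketch of that general technique (not the paper's actual objective, which also includes distillation and adversarial-training terms): proximal gradient descent on a smooth loss plus an L1 penalty, where the proximal operator is soft-thresholding and drives small weights exactly to zero. The toy quadratic loss and all names here are assumptions for illustration.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: shrinks each weight toward zero and
    # zeroes those with magnitude below t -- this is what induces pruning.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def proximal_gradient(grad, w0, lam, lr=0.1, steps=200):
    # Minimize f(w) + lam * ||w||_1: each iteration takes a plain gradient
    # step on the smooth part f, then applies the proximal (shrinkage) step.
    # Only the current iterate is kept, unlike ADMM, which also maintains
    # auxiliary and dual variables.
    w = w0.astype(float)
    for _ in range(steps):
        w = soft_threshold(w - lr * grad(w), lr * lam)
    return w

# Toy smooth loss f(w) = 0.5 * ||w - target||^2 standing in for the
# training loss; its gradient is simply w - target.
target = np.array([3.0, 0.05, -2.0, 0.01])
w = proximal_gradient(lambda w: w - target, np.ones(4), lam=0.5)
# Entries of target smaller than lam are pruned to exactly zero;
# the large entries survive, shrunk by lam.
```

For this toy loss the closed-form minimizer is `soft_threshold(target, lam)`, so the iterate converges to `[2.5, 0.0, -1.5, 0.0]`, showing how the proximal step yields exact sparsity rather than merely small weights.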
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - School of Cyber Security > Department of Information Security > 1. Journal Articles