
Evaluating the Visualization of What a Deep Neural Network Has Learned

Authors
Samek, Wojciech; Binder, Alexander; Montavon, Gregoire; Lapuschkin, Sebastian; Mueller, Klaus-Robert
Issue Date
Nov-2017
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Convolutional neural networks; explaining classification; image classification; interpretable machine learning; relevance models
Citation
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.28, no.11, pp.2660 - 2673
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume
28
Number
11
Start Page
2660
End Page
2673
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/81770
DOI
10.1109/TNNLS.2016.2599820
ISSN
2162-237X
Abstract
Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.
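The region-perturbation methodology described in the abstract can be illustrated with a short sketch. The idea is to rank image regions by their heatmap relevance, replace the most relevant regions first with uninformative (e.g., random) values, and track how quickly the classifier's score drops: a faithful heatmap produces a steep drop. The sketch below is a simplified, hedged illustration, not the paper's exact procedure; the function names (`morf_curve`), the non-overlapping tile scheme, the uniform-noise perturbation, and the `classify` callback are assumptions introduced here for clarity.

```python
import numpy as np

def morf_curve(image, heatmap, classify, tile=9, steps=30, rng=None):
    """Most-Relevant-First (MoRF) perturbation curve (simplified sketch).

    Splits the image into non-overlapping tile x tile regions, ranks them
    by summed heatmap relevance, then replaces them one at a time with
    uniform noise, recording the classifier score after each step.
    """
    rng = np.random.default_rng(rng)
    img = image.copy()
    h, w = image.shape[:2]
    ny, nx = h // tile, w // tile
    # Relevance of each tile = sum of the heatmap over that tile.
    tile_relevance = (heatmap[:ny * tile, :nx * tile]
                      .reshape(ny, tile, nx, tile)
                      .sum(axis=(1, 3)))
    # Tile coordinates sorted by descending relevance.
    order = np.argsort(tile_relevance, axis=None)[::-1]
    ranked = np.stack(np.unravel_index(order, (ny, nx)), axis=1)
    curve = [classify(img)]
    for ty, tx in ranked[:steps]:
        ys, xs = ty * tile, tx * tile
        # Replace the most relevant remaining region with uniform noise.
        img[ys:ys + tile, xs:xs + tile] = rng.uniform(
            image.min(), image.max(), (tile, tile) + image.shape[2:])
        curve.append(classify(img))
    curve = np.asarray(curve)
    # Area over the perturbation curve: average score drop relative to
    # the unperturbed input. Larger values indicate a better explanation.
    aopc = float(np.mean(curve[0] - curve))
    return curve, aopc
```

Comparing the resulting curves (or the area over them) across heatmapping methods such as sensitivity analysis, deconvolution, and layer-wise relevance propagation is what turns the subjective visual comparison into an objective, quantitative one.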
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles