Detailed Information

Towards explaining anomalies: A deep Taylor decomposition of one-class models

Authors
Kauffmann, Jacob; Mueller, Klaus-Robert; Montavon, Gregoire
Issue Date
May 2020
Publisher
ELSEVIER SCI LTD
Keywords
Outlier detection; Explainable machine learning; Deep Taylor decomposition; Kernel machines; Unsupervised learning
Citation
PATTERN RECOGNITION, v.101
Indexed
SCIE
SCOPUS
Journal Title
PATTERN RECOGNITION
Volume
101
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/56110
DOI
10.1016/j.patcog.2020.107198
ISSN
0031-3203
Abstract
Detecting anomalies in data is a common machine learning task, with numerous applications in the sciences and industry. In practice, it is not always sufficient to reach high detection accuracy; one would also like to understand why a given data point has been predicted to be anomalous. We propose a principled approach for one-class SVMs (OC-SVMs) that draws on the novel insight that these models can be rewritten as distance/pooling neural networks. This 'neuralization' step lets us apply deep Taylor decomposition (DTD), a methodology that leverages the model structure in order to quickly and reliably explain decisions in terms of input features. The proposed method (called 'OC-DTD') is applicable to a number of common distance-based kernel functions, and it outperforms baselines such as sensitivity analysis, distance to nearest neighbor, or edge detection.
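The abstract describes rewriting a kernel one-class SVM as a distance/pooling neural network and then redistributing its anomaly score onto input features with deep Taylor decomposition. The following is a minimal sketch of that idea for a Gaussian-kernel OC-SVM, assuming a log-sum-exp soft-min pooling over squared distances and a proportional feature-wise redistribution; the function name, arguments, and exact propagation rule are illustrative assumptions, not the formulation given in the paper.

```python
import numpy as np

def neuralized_ocsvm_explain(x, support_vectors, alphas, gamma):
    """Sketch of a 'neuralized' Gaussian-kernel OC-SVM with a DTD-style
    feature-wise explanation of its anomaly score (illustrative only).

    x               : (n_features,)        test point
    support_vectors : (n_sv, n_features)   OC-SVM support vectors
    alphas          : (n_sv,)              positive dual coefficients
    gamma           : float                Gaussian kernel width parameter
    """
    # Layer 1: feature-wise squared-distance contributions to each support vector.
    diffs = x - support_vectors                # (n_sv, n_features)
    sq = gamma * diffs ** 2                    # per-feature distance terms
    d = sq.sum(axis=1) - np.log(alphas)        # effective distance per support vector

    # Layer 2: soft min-pooling (log-sum-exp) yields the anomaly score.
    score = -np.log(np.exp(-d).sum())

    # DTD-style redistribution: softmin weights select nearby support vectors,
    # then each feature receives credit proportional to its distance term.
    p = np.exp(-d) / np.exp(-d).sum()          # (n_sv,) pooling weights
    relevance = (p[:, None] * sq).sum(axis=0)  # (n_features,) per-feature relevance
    return score, relevance
```

Under these assumptions, features with large `relevance` values are the ones driving the point away from the support vectors that dominate the pooling, which is the kind of input-level explanation the paper targets.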
Files in This Item
There are no files associated with this item.
Appears in
Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles
