Detailed Information


"What is relevant in a text document?": An interpretable machine learning approach

Authors
Arras, Leila; Horn, Franziska; Montavon, Gregoire; Mueller, Klaus-Robert; Samek, Wojciech
Issue Date
11-Aug-2017
Publisher
Public Library of Science
Citation
PLOS ONE, v.12, no.8
Indexed
SCIE
SCOPUS
Journal Title
PLOS ONE
Volume
12
Number
8
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/82566
DOI
10.1371/journal.pone.0181142
ISSN
1932-6203
Abstract
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate text collections far larger than a human could process in a lifetime. Besides predicting a text's category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications.
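To make the mechanism concrete, the sketch below illustrates LRP's epsilon rule on a single linear layer: the score of the predicted class is redistributed onto the input dimensions in proportion to their contributions, and the relevance is then pooled per word. This is a minimal illustration under assumptions of our own (a toy one-layer classifier, random embeddings, and the hypothetical helper `lrp_epsilon_linear`), not the authors' implementation or their CNN/SVM setup.

```python
import numpy as np

def lrp_epsilon_linear(a, W, b, R_out, eps=1e-2):
    """One LRP-epsilon step through a linear layer z = a @ W + b.

    a     : (d_in,)        input activations
    W     : (d_in, d_out)  weight matrix
    b     : (d_out,)       bias
    R_out : (d_out,)       relevance arriving at the layer's output
    Returns R_in : (d_in,) relevance redistributed onto the inputs.
    """
    z = a @ W + b                                   # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)       # epsilon stabilizer, avoids division by ~0
    # Input i receives relevance in proportion to its contribution a_i * W[i, j]
    # to each output pre-activation z_j:  R_i = a_i * sum_j W[i, j] * R_j / z_j
    return a * (W @ (R_out / z))

# Toy usage: a bag of 5 "word" embeddings feeding one linear classifier.
rng = np.random.default_rng(0)
n_words, emb_dim, n_classes = 5, 4, 3
a = rng.normal(size=n_words * emb_dim)              # flattened word embeddings
W = rng.normal(size=(n_words * emb_dim, n_classes))
b = np.zeros(n_classes)

scores = a @ W + b
R_out = np.where(np.arange(n_classes) == scores.argmax(), scores, 0.0)  # explain predicted class only
R_in = lrp_epsilon_linear(a, W, b, R_out)

# Pool relevance over each word's embedding dimensions -> one score per word.
word_relevance = R_in.reshape(n_words, emb_dim).sum(axis=1)
print(word_relevance)
```

In the paper's setting, the same kind of redistribution would be applied layer by layer through the trained network, after which each word's relevance is obtained by summing over the dimensions of its embedding.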
Files in This Item
There are no files associated with this item.
Appears in
Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
