Scrutinizing XAI using linear ground-truth data with suppressor variables
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wilming, Rick | - |
dc.contributor.author | Budding, Celine | - |
dc.contributor.author | Mueller, Klaus-Robert | - |
dc.contributor.author | Haufe, Stefan | - |
dc.date.accessioned | 2022-08-13T21:40:44Z | - |
dc.date.available | 2022-08-13T21:40:44Z | - |
dc.date.created | 2022-08-12 | - |
dc.date.issued | 2022-05 | - |
dc.identifier.issn | 0885-6125 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/143076 | - |
dc.description.abstract | Machine learning (ML) is increasingly used to inform high-stakes decisions. As complex ML models (e.g., deep neural networks) are often considered black boxes, a wealth of procedures has been developed to shed light on their inner workings and the ways in which their predictions come about, defining the field of 'explainable AI' (XAI). Saliency methods rank input features according to some measure of 'importance'. Such methods are difficult to validate since a formal definition of feature importance is, thus far, lacking. It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables). To avoid misinterpretations due to such behavior, we propose the actual presence of such an association as a necessary condition and objective preliminary definition for feature importance. We carefully crafted a ground-truth dataset in which all statistical dependencies are well-defined and linear, serving as a benchmark to study the problem of suppressor variables. We evaluate common explanation methods including LRP, DTD, PatternNet, PatternAttribution, LIME, Anchors, SHAP, and permutation-based methods with respect to our objective definition. We show that most of these methods are unable to distinguish important features from suppressors in this setting. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | SPRINGER | - |
dc.subject | MODELS | - |
dc.subject | DECISIONS | - |
dc.title | Scrutinizing XAI using linear ground-truth data with suppressor variables | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Mueller, Klaus-Robert | - |
dc.identifier.doi | 10.1007/s10994-022-06167-y | - |
dc.identifier.scopusid | 2-s2.0-85128244413 | - |
dc.identifier.wosid | 000782327600001 | - |
dc.identifier.bibliographicCitation | MACHINE LEARNING, v.111, no.5, pp.1903 - 1923 | - |
dc.relation.isPartOf | MACHINE LEARNING | - |
dc.citation.title | MACHINE LEARNING | - |
dc.citation.volume | 111 | - |
dc.citation.number | 5 | - |
dc.citation.startPage | 1903 | - |
dc.citation.endPage | 1923 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordPlus | MODELS | - |
dc.subject.keywordPlus | DECISIONS | - |
dc.subject.keywordAuthor | Explainable AI | - |
dc.subject.keywordAuthor | Saliency methods | - |
dc.subject.keywordAuthor | Ground truth | - |
dc.subject.keywordAuthor | Benchmark | - |
dc.subject.keywordAuthor | Linear classification | - |
dc.subject.keywordAuthor | Suppressor variables | - |
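The suppressor-variable phenomenon described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy construction, not the paper's exact generative model: feature `x1` mixes the target with a distractor, feature `x2` contains only the distractor, yet an optimal linear model must assign `x2` a large weight to cancel the distractor out, even though `x2` has no statistical association with the target.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.standard_normal(n)   # prediction target (the signal)
d = rng.standard_normal(n)   # distractor, drawn independently of z

x1 = z + d                   # feature 1: signal contaminated by the distractor
x2 = d                       # feature 2: suppressor, carries no information about z on its own
X = np.column_stack([x1, x2])

# least-squares weights for predicting z from (x1, x2);
# the exact solution is z = x1 - x2, so both features get large weights
w, *_ = np.linalg.lstsq(X, z, rcond=None)

corr_x2_z = np.corrcoef(x2, z)[0, 1]
print(corr_x2_z)             # near 0: x2 alone is uninformative about z
print(w)                     # near [1, -1]: yet the model weights x2 heavily
```

A saliency method that reads importance off the model's weights would rank the suppressor `x2` as highly important here, which is exactly the misinterpretation the paper's benchmark is designed to expose.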