Detailed Information

Cited 0 times in Web of Science. Cited 0 times in Scopus.

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

Full metadata record
dc.contributor.author: Samek, Wojciech
dc.contributor.author: Montavon, Gregoire
dc.contributor.author: Lapuschkin, Sebastian
dc.contributor.author: Anders, Christopher J.
dc.contributor.author: Mueller, Klaus-Robert
dc.date.accessioned: 2021-11-23T17:40:44Z
dc.date.available: 2021-11-23T17:40:44Z
dc.date.created: 2021-08-30
dc.date.issued: 2021-03
dc.identifier.issn: 0018-9219
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/128500
dc.description.abstract: With the broader and highly successful usage of machine learning (ML) in industry and the sciences, there has been a growing demand for explainable artificial intelligence (XAI). Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear ML, in particular, deep neural networks, are, therefore, receiving increased attention. In this work, we aim to: 1) provide a timely overview of this active emerging field, with a focus on "post hoc" explanations, and explain its theoretical foundations; 2) put interpretability algorithms to a test both from a theory and comparative evaluation perspective using extensive simulations; 3) outline best practice aspects, i.e., how to best include interpretation methods into the standard usage of ML; and 4) demonstrate successful usage of XAI in a representative selection of application scenarios. Finally, we discuss challenges and possible future directions of this exciting foundational field of ML.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.subject: BLACK-BOX
dc.subject: MODELS
dc.subject: CLASSIFICATION
dc.subject: EXPLANATION
dc.subject: PREDICTION
dc.subject: DECISIONS
dc.subject: IMAGES
dc.title: Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
dc.type: Article
dc.contributor.affiliatedAuthor: Mueller, Klaus-Robert
dc.identifier.doi: 10.1109/JPROC.2021.3060483
dc.identifier.scopusid: 2-s2.0-85101763532
dc.identifier.wosid: 000626523700003
dc.identifier.bibliographicCitation: PROCEEDINGS OF THE IEEE, v.109, no.3, pp.247 - 278
dc.relation.isPartOf: PROCEEDINGS OF THE IEEE
dc.citation.title: PROCEEDINGS OF THE IEEE
dc.citation.volume: 109
dc.citation.number: 3
dc.citation.startPage: 247
dc.citation.endPage: 278
dc.type.rims: ART
dc.type.docType: Review
dc.description.journalClass: 1
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.subject.keywordPlus: BLACK-BOX
dc.subject.keywordPlus: MODELS
dc.subject.keywordPlus: CLASSIFICATION
dc.subject.keywordPlus: EXPLANATION
dc.subject.keywordPlus: PREDICTION
dc.subject.keywordPlus: DECISIONS
dc.subject.keywordPlus: IMAGES
dc.subject.keywordAuthor: Black-box models
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: explainable artificial intelligence (XAI)
dc.subject.keywordAuthor: Interpretability
dc.subject.keywordAuthor: model transparency
dc.subject.keywordAuthor: neural networks
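
The record above describes a review of "post hoc" explanation methods for deep neural networks. As a purely illustrative aside (not code from the article), the minimal sketch below shows one generic post hoc attribution technique, gradient x input saliency, applied to a hypothetical toy PyTorch classifier; the model, input, and shapes are placeholder assumptions.

# Minimal sketch of a generic "post hoc" attribution method
# (gradient x input saliency). Model and input are toy placeholders.
import torch
import torch.nn as nn

# Hypothetical small classifier standing in for a trained deep network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # one input sample
logits = model(x)
predicted = logits.argmax(dim=1).item()      # class to be explained

# Gradient of the predicted-class logit w.r.t. the input, multiplied
# elementwise by the input: a simple first-order attribution of the
# prediction to individual input features.
logits[0, predicted].backward()
attribution = (x.grad * x).detach().squeeze(0)
print(attribution)

The review surveys more elaborate attribution and interpretation schemes along these lines, together with their theoretical foundations and evaluation.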
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
