Feature-Based Interpretation of the Deep Neural Network
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Eun-Hun | - |
dc.contributor.author | Kim, Hyeoncheol | - |
dc.date.accessioned | 2022-02-15T15:41:46Z | - |
dc.date.available | 2022-02-15T15:41:46Z | - |
dc.date.created | 2022-02-08 | - |
dc.date.issued | 2021-11 | - |
dc.identifier.issn | 2079-9292 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/135876 | - |
dc.description.abstract | A significant advantage of deep neural networks is that, by stacking layers deeply, the upper layers can capture high-level features of the data based on information acquired from the lower layers. Because it is challenging to interpret what knowledge a neural network has learned, various studies on explaining neural networks have emerged to address this problem. However, these studies generate local explanations of single instances rather than a generalized global interpretation of the neural network model itself. To overcome this drawback of previous approaches, we propose a global interpretation method for deep neural networks based on the features of the model. We first analyze the relationship between the input and hidden layers to represent the high-level features of the model, and then interpret the decision-making process of the neural network through those high-level features. In addition, we apply network pruning techniques to produce concise explanations and analyze the effect of layer complexity on interpretability. We present experiments on the proposed approach using three different datasets and show that it generates global explanations of deep neural network models with high accuracy and fidelity. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | MDPI | - |
dc.subject | EXTRACTING RULES | - |
dc.title | Feature-Based Interpretation of the Deep Neural Network | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Hyeoncheol | - |
dc.identifier.doi | 10.3390/electronics10212687 | - |
dc.identifier.scopusid | 2-s2.0-85118353717 | - |
dc.identifier.wosid | 000720252100001 | - |
dc.identifier.bibliographicCitation | ELECTRONICS, v.10, no.21 | - |
dc.relation.isPartOf | ELECTRONICS | - |
dc.citation.title | ELECTRONICS | - |
dc.citation.volume | 10 | - |
dc.citation.number | 21 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Physics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
dc.subject.keywordPlus | EXTRACTING RULES | - |
dc.subject.keywordAuthor | explainable artificial intelligence (XAI) | - |
dc.subject.keywordAuthor | interpretability | - |
dc.subject.keywordAuthor | neural network | - |
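The abstract describes treating hidden-layer activations as high-level features and extracting a global, rule-like explanation whose fidelity is measured against the network's own predictions. The following is a minimal illustrative sketch of that general idea, not the authors' algorithm: all weights, data, and helper names here are hypothetical stand-ins, and the surrogate is a single threshold rule rather than the paper's feature-based interpretation procedure.

```python
import math
import random

random.seed(0)

# Toy "trained" network: 2 inputs -> 3 hidden units (ReLU) -> 1 output (sigmoid).
# Random weights stand in for a trained model (hypothetical, for illustration only).
W1 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.1, -0.2, 0.05]
W2 = [1.5, -2.0, 0.8]
b2 = -0.1

def hidden_features(x):
    """High-level features: the hidden-layer activations for input x."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W1, b1)]

def network_predict(x):
    """The network's own binary decision for input x."""
    h = hidden_features(x)
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Sample inputs, the network's decisions, and the hidden features.
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
y_net = [network_predict(x) for x in X]
H = [hidden_features(x) for x in X]

def best_rule(H, y):
    """Global surrogate: one threshold rule per hidden feature; keep the rule
    that best mimics the network. Fidelity = fraction of agreement with the
    network's predictions."""
    best = None
    for j in range(len(H[0])):
        t = sum(h[j] for h in H) / len(H)  # threshold at the feature's mean
        for sign in (1, -1):
            pred = [1 if sign * (h[j] - t) > 0 else 0 for h in H]
            fid = sum(p == yi for p, yi in zip(pred, y)) / len(y)
            if best is None or fid > best[0]:
                best = (fid, j, t, sign)
    return best

fidelity, feat, thr, sign = best_rule(H, y_net)
print(f"rule: h{feat} {'>' if sign == 1 else '<='} {thr:.3f}  fidelity={fidelity:.2f}")
```

Because each candidate rule is tried with both orientations, the selected rule's fidelity is always at least 0.5; a real rule-extraction method would of course aim much higher, which is what the paper's accuracy and fidelity experiments evaluate.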
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.