A new approach to training more interpretable model with additional segmentation
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shin, Sunguk | - |
dc.contributor.author | Kim, Youngjoon | - |
dc.contributor.author | Yoon, Ji Won | - |
dc.date.accessioned | 2022-02-13T05:40:32Z | - |
dc.date.available | 2022-02-13T05:40:32Z | - |
dc.date.created | 2022-02-09 | - |
dc.date.issued | 2021-12 | - |
dc.identifier.issn | 0167-8655 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/135586 | - |
dc.description.abstract | Because complicated deep learning models are essentially black boxes, it is not straightforward to understand how they work. To address this problem, various approaches have been developed to provide interpretability for black-box deep learning models. However, traditional interpretable machine learning only explains models that have already been trained; if a model is not properly trained, interpretable machine learning will not work well. We propose a simple but effective method that trains models to improve interpretability for image classification, and we evaluate how well the models focus on appropriate objects rather than relying on classification accuracy alone. We use Class Activation Mapping (CAM) to train and evaluate model interpretability. On the PASCAL VOC 2012 dataset, the ResNet50 model trained by the proposed approach achieves a 0.5IOU of 29.61%, compared with 13.00% for the model trained only on images and labels. The classification accuracy of the proposed approach is 75.03%, versus 68.38% for the existing method and 60.69% for FCN. These evaluations show that the proposed approach is effective. (c) 2021 Elsevier B.V. All rights reserved. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER | - |
dc.title | A new approach to training more interpretable model with additional segmentation | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Yoon, Ji Won | - |
dc.identifier.doi | 10.1016/j.patrec.2021.10.003 | - |
dc.identifier.scopusid | 2-s2.0-85117602816 | - |
dc.identifier.wosid | 000711455600014 | - |
dc.identifier.bibliographicCitation | PATTERN RECOGNITION LETTERS, v.152, pp.188 - 194 | - |
dc.relation.isPartOf | PATTERN RECOGNITION LETTERS | - |
dc.citation.title | PATTERN RECOGNITION LETTERS | - |
dc.citation.volume | 152 | - |
dc.citation.startPage | 188 | - |
dc.citation.endPage | 194 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordAuthor | Classification model | - |
dc.subject.keywordAuthor | Convolutional neural networks | - |
dc.subject.keywordAuthor | Interpretable machine learning | - |
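The abstract's evaluation pipeline, CAM computed from the final convolutional features and scored against a segmentation mask at a 0.5 IoU threshold, can be sketched roughly as follows. This is a minimal NumPy sketch; the array shapes, function names, and threshold handling are illustrative assumptions, not the paper's code.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a CAM: the feature maps of the last conv layer, weighted by
    the fully-connected weights of one class, then ReLU'd and normalised.

    features:   (C, H, W) final convolutional feature maps
    fc_weights: (num_classes, C) classifier weight matrix
    class_idx:  index of the class to visualise
    """
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # scale into [0, 1]
    return cam

def iou_at_threshold(cam, mask, thr=0.5):
    """Binarise the CAM at `thr` and compute IoU with a ground-truth mask."""
    pred = cam >= thr
    inter = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    return inter / union if union else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.random((512, 7, 7))       # e.g. ResNet50's last conv block
    weights = rng.random((20, 512))       # 20 PASCAL VOC object classes
    mask = np.zeros((7, 7), dtype=bool)
    mask[2:5, 2:5] = True                 # toy ground-truth object region
    cam = class_activation_map(feats, weights, class_idx=3)
    print(f"IoU@0.5 = {iou_at_threshold(cam, mask):.3f}")
```

The 0.5IOU figures quoted in the abstract (29.61% vs. 13.00%) would correspond to averaging a score like `iou_at_threshold` over the dataset; the paper's exact aggregation protocol is not specified in this record.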