Towards an Interpretable Deep Driving Network by Attentional Bottleneck
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Jinkyu | - |
dc.contributor.author | Bansal, Mayank | - |
dc.date.accessioned | 2022-11-04T10:41:48Z | - |
dc.date.available | 2022-11-04T10:41:48Z | - |
dc.date.created | 2022-11-04 | - |
dc.date.issued | 2021-10 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/144642 | - |
dc.description.abstract | Deep neural networks are a key component of behavior prediction and motion generation for self-driving cars. One of their main drawbacks is a lack of transparency: they should provide easy-to-interpret rationales for what triggers certain behaviors. We propose an architecture called Attentional Bottleneck with the goal of improving transparency. Our key idea is to combine visual attention, which identifies what aspects of the input the model is using, with an information bottleneck that enables the model to use only the aspects of the input that are important. This not only provides sparse and interpretable attention maps (e.g. focusing only on specific vehicles in the scene), but also adds this transparency at no cost to model accuracy. In fact, we find improvements in accuracy when applying Attentional Bottleneck to the ChauffeurNet model, whereas accuracy deteriorates with a traditional visual attention model. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Towards an Interpretable Deep Driving Network by Attentional Bottleneck | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Jinkyu | - |
dc.identifier.doi | 10.1109/LRA.2021.3096495 | - |
dc.identifier.scopusid | 2-s2.0-85110882468 | - |
dc.identifier.wosid | 000681126100007 | - |
dc.identifier.bibliographicCitation | IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.4, pp.7349 - 7356 | - |
dc.relation.isPartOf | IEEE ROBOTICS AND AUTOMATION LETTERS | - |
dc.citation.title | IEEE ROBOTICS AND AUTOMATION LETTERS | - |
dc.citation.volume | 6 | - |
dc.citation.number | 4 | - |
dc.citation.startPage | 7349 | - |
dc.citation.endPage | 7356 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.subject.keywordAuthor | Explainable AI (XAI) | - |
dc.subject.keywordAuthor | deep driving network | - |
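The abstract combines two ideas: a visual attention map over the input, and an information-bottleneck penalty that keeps only the informative parts of that map. The sketch below is a simplified, illustrative stand-in for that combination, not the paper's actual architecture (which builds on ChauffeurNet): it computes a softmax attention map over toy feature cells, measures its KL divergence from an uninformative uniform prior (the kind of term a bottleneck objective penalizes), and hardens the map to a sparse top-k version. All shapes, names, and the top-k hardening step are assumptions for illustration.

```python
import math
import random

random.seed(0)

# Toy "feature map": 16 spatial cells with 8-dim features each
# (hypothetical shapes; the paper's model operates on driving scenes).
n_cells, dim = 16, 8
features = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_cells)]
score_w = [random.gauss(0, 1) for _ in range(dim)]

# Plain visual attention: dot-product score per cell -> softmax over cells.
logits = [sum(f * w for f, w in zip(cell, score_w)) for cell in features]
m = max(logits)
exps = [math.exp(l - m) for l in logits]
z = sum(exps)
attention = [e / z for e in exps]

# Information-bottleneck-style sparsification (illustrative only): the KL
# divergence from a uniform prior quantifies how much input information the
# attention passes through; a bottleneck objective would penalize this term.
kl_to_prior = sum(a * math.log(a * n_cells) for a in attention)

# Harden the map: keep only the k highest-weight cells, renormalize.
k = 3
threshold = sorted(attention)[-k]
sparse = [a if a >= threshold else 0.0 for a in attention]
total = sum(sparse)
sparse = [a / total for a in sparse]

print("dense nonzeros:", sum(a > 0 for a in attention))   # 16
print("sparse nonzeros:", sum(a > 0 for a in sparse))     # 3
print("KL(attention || uniform):", round(kl_to_prior, 3))
```

The sparse map is what makes the rationale readable: instead of diffuse weight spread over the whole scene, only a few cells (e.g. specific vehicles) carry nonzero attention.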