Detailed Information

Towards an Interpretable Deep Driving Network by Attentional Bottleneck

Authors
Kim, Jinkyu; Bansal, Mayank
Issue Date
October 2021
Publisher
IEEE (Institute of Electrical and Electronics Engineers)
Keywords
Explainable AI (XAI); deep driving network
Citation
IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.4, pp. 7349-7356
Indexed
SCIE
SCOPUS
Journal Title
IEEE ROBOTICS AND AUTOMATION LETTERS
Volume
6
Number
4
Start Page
7349
End Page
7356
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/144642
DOI
10.1109/LRA.2021.3096495
ISSN
2377-3766
Abstract
Deep neural networks are a key component of behavior prediction and motion generation for self-driving cars. One of their main drawbacks is a lack of transparency: ideally, they would provide easy-to-interpret rationales for what triggers certain behaviors. We propose an architecture called Attentional Bottleneck with the goal of improving transparency. Our key idea is to combine visual attention, which identifies what aspects of the input the model is using, with an information bottleneck that enables the model to use only those aspects of the input that are important. This not only yields sparse and interpretable attention maps (e.g., focusing only on specific vehicles in the scene), but also adds this transparency at no cost to model accuracy. In fact, we find improvements in accuracy when applying Attentional Bottleneck to the ChauffeurNet model, whereas accuracy deteriorates with a traditional visual attention model.
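The mechanism described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch module in the spirit of an attentional bottleneck: a per-pixel Gaussian latent is sampled to form a spatial attention map that gates the input features, and a KL penalty to a unit-Gaussian prior limits how much information the map can carry, pressuring it toward sparsity. All module names, shapes, and hyperparameters here are assumptions made for illustration; this is not the authors' implementation of Attentional Bottleneck or ChauffeurNet.

```python
# Hypothetical sketch of an attentional-bottleneck-style module.
# Assumption: a spatial feature map (B, C, H, W) from a driving backbone.
import torch
import torch.nn as nn

class AttentionalBottleneck(nn.Module):
    def __init__(self, channels: int, beta: float = 1e-3):
        super().__init__()
        # 1x1 convs predict a per-pixel Gaussian over attention logits.
        self.mu_head = nn.Conv2d(channels, 1, kernel_size=1)
        self.logvar_head = nn.Conv2d(channels, 1, kernel_size=1)
        self.beta = beta  # weight of the KL (bottleneck) term

    def forward(self, feats: torch.Tensor):
        mu = self.mu_head(feats)          # (B, 1, H, W)
        logvar = self.logvar_head(feats)  # (B, 1, H, W)
        if self.training:
            # Reparameterized sample: injected noise limits the
            # information the attention map can transmit.
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        else:
            z = mu
        # Normalize over spatial locations to get a sparse attention map.
        attn = torch.softmax(z.flatten(2), dim=-1).view_as(z)
        gated = feats * attn              # attended, bottlenecked features
        # KL to a unit-Gaussian prior; encourages the map to carry few bits.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).mean()
        return gated, attn, self.beta * kl

if __name__ == "__main__":
    feats = torch.randn(2, 64, 50, 50)
    module = AttentionalBottleneck(channels=64)
    gated, attn, kl = module(feats)
    print(gated.shape, attn.shape, float(kl))
```

In a stack of this kind, the gated features would feed the downstream driving model, the KL term would be added to the training loss, and the attention map would serve as the interpretability output that can be visualized over the scene.
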
Files in This Item
There are no files associated with this item.
Appears in Collections
Graduate School > Department of Computer Science and Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
