Detailed Information

Multi-View Attention Network for Visual Dialog

Authors
Park, Sungjin; Whang, Taesun; Yoon, Yeochan; Lim, Heuiseok
Issue Date
Apr-2021
Publisher
MDPI
Keywords
visual dialog; attention mechanism; multimodal learning; vision-language
Citation
APPLIED SCIENCES-BASEL, v.11, no.7
Indexed
SCIE
SCOPUS
Journal Title
APPLIED SCIENCES-BASEL
Volume
11
Number
7
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/128333
DOI
10.3390/app11073009
ISSN
2076-3417
Abstract
Visual dialog is a challenging vision-language task in which a series of questions grounded in a given image must be answered. Solving the task requires a high-level understanding of heterogeneous multimodal inputs (e.g., the question, the dialog history, and the image). Specifically, an agent must (1) determine the semantic intent of the question and (2) align question-relevant textual and visual content across the heterogeneous modality inputs. In this paper, we propose the Multi-View Attention Network (MVAN), which leverages multiple views of the heterogeneous inputs through attention mechanisms. MVAN effectively captures question-relevant information from the dialog history with two complementary modules (i.e., Topic Aggregation and Context Matching) and builds multimodal representations through sequential alignment processes (i.e., Modality Alignment). Experimental results on the VisDial v1.0 dataset show the effectiveness of the proposed model, which outperforms previous state-of-the-art methods in both single-model and ensemble settings.
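
The attention-based alignment the abstract describes can be illustrated with a minimal sketch. The code below is not the authors' MVAN implementation; it shows only a generic scaled dot-product attention step that pools image-region features by their relevance to a question embedding. All names (attend, q, v) and dimensions are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def attend(query, context):
        """Scaled dot-product attention: pool `context` by relevance to `query`.

        query:   (batch, d)     e.g., a question embedding
        context: (batch, n, d)  e.g., n image-region or history features
        returns: (batch, d)     query-attended context summary
        """
        d = query.size(-1)
        # Relevance score of each context element with respect to the query.
        scores = torch.bmm(context, query.unsqueeze(-1)).squeeze(-1) / d ** 0.5  # (batch, n)
        weights = F.softmax(scores, dim=-1)                                       # (batch, n)
        # Weighted sum of context features.
        return torch.bmm(weights.unsqueeze(1), context).squeeze(1)                # (batch, d)

    # Toy usage: align a question vector with 36 image-region features
    # (hypothetical sizes; e.g., region features from an object detector).
    q = torch.randn(2, 512)        # question embeddings
    v = torch.randn(2, 36, 512)    # image-region features
    fused = attend(q, v)           # (2, 512) question-relevant visual summary

In MVAN, per the abstract, such attention is applied from multiple views (over the dialog history via Topic Aggregation and Context Matching, and across modalities via Modality Alignment); the sketch above covers only the basic single-view alignment step.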
Appears in Collections
Graduate School > Department of Computer Science and Engineering > 1. Journal Articles
