Detailed Information


Multimodal Deep Fusion Network for Visibility Assessment With a Small Training Dataset

Full metadata record
DC Field: Value
dc.contributor.author: Wang, Han
dc.contributor.author: Shen, Kecheng
dc.contributor.author: Yu, Peilun
dc.contributor.author: Shi, Quan
dc.contributor.author: Ko, Hanseok
dc.date.accessioned: 2021-08-31T16:11:48Z
dc.date.available: 2021-08-31T16:11:48Z
dc.date.created: 2021-06-18
dc.date.issued: 2020
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://scholar.korea.ac.kr/handle/2021.sw.korea/59042
dc.description.abstract: Visibility is a measure of the transparency of the atmosphere, which is an important factor for road, air, and water transportation safety. Recently, features extracted from convolutional neural networks (CNNs) have obtained state-of-the-art results for the estimation of the visibility range for images of foggy weather. However, existing CNN-based approaches have only adopted visible images as observational data. Unlike these previous studies, in this paper, visible-infrared image pairs are used to estimate the visibility range. A novel multimodal deep fusion architecture based on a CNN is then proposed to learn the robust joint features of the two sensor modalities. Our network architecture is composed of two integrated residual network processing streams and one CNN stream, which are connected in parallel. In addition, we construct a visible-infrared multimodal dataset for various fog densities and label the visibility range. We then compare our proposed method with conventional deep-learning-based approaches and analyze the contributions of various observational and classical deep fusion models to the classification of the visibility range. The experimental results demonstrate that both accuracy and robustness can be strongly enhanced using the proposed method, especially for small training datasets.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.subject: ATMOSPHERIC VISIBILITY
dc.subject: FOG
dc.title: Multimodal Deep Fusion Network for Visibility Assessment With a Small Training Dataset
dc.type: Article
dc.contributor.affiliatedAuthor: Ko, Hanseok
dc.identifier.doi: 10.1109/ACCESS.2020.3031283
dc.identifier.scopusid: 2-s2.0-85098072459
dc.identifier.wosid: 000597193400001
dc.identifier.bibliographicCitation: IEEE ACCESS, v.8, pp.217057 - 217067
dc.relation.isPartOf: IEEE ACCESS
dc.citation.title: IEEE ACCESS
dc.citation.volume: 8
dc.citation.startPage: 217057
dc.citation.endPage: 217067
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.subject.keywordPlus: ATMOSPHERIC VISIBILITY
dc.subject.keywordPlus: FOG
dc.subject.keywordAuthor: Atmospheric modeling
dc.subject.keywordAuthor: Feature extraction
dc.subject.keywordAuthor: Estimation
dc.subject.keywordAuthor: Cameras
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Image resolution
dc.subject.keywordAuthor: Visibility range classification
dc.subject.keywordAuthor: multimodal fusion network
dc.subject.keywordAuthor: visible–infrared image pairs
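
The abstract above describes the network only at a high level: two residual-network streams and one plain CNN stream run in parallel over visible-infrared image pairs, and their features are fused to classify the visibility range. Since this record gives no layer configuration, the following is a minimal, hypothetical PyTorch sketch of that parallel-stream idea, not the authors' published design; the ResNet-18 backbones, the 6-channel joint stream, the feature sizes, and the choice of 4 visibility classes are all assumptions.

import torch
import torch.nn as nn
import torchvision.models as models

class FusionVisibilityNet(nn.Module):
    """Hypothetical sketch of a parallel multimodal fusion network:
    one residual stream per modality plus a shallow CNN stream on the
    stacked visible+infrared input, fused by a linear classifier."""
    def __init__(self, num_classes=4):
        super().__init__()
        # Residual stream for the visible image; replace the ImageNet
        # head so the stream yields a 512-d feature vector.
        self.vis_stream = models.resnet18(weights=None)
        self.vis_stream.fc = nn.Identity()
        # Residual stream for the infrared image (treated as 3-channel here).
        self.ir_stream = models.resnet18(weights=None)
        self.ir_stream.fc = nn.Identity()
        # Plain CNN stream over the 6-channel stacked visible+IR pair.
        self.joint_stream = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 64-d features
        )
        # Late fusion: concatenate the three streams and classify.
        self.classifier = nn.Linear(512 + 512 + 64, num_classes)

    def forward(self, visible, infrared):
        f_vis = self.vis_stream(visible)
        f_ir = self.ir_stream(infrared)
        f_joint = self.joint_stream(torch.cat([visible, infrared], dim=1))
        return self.classifier(torch.cat([f_vis, f_ir, f_joint], dim=1))

# Usage: one visible-infrared pair as same-size 3-channel tensors.
model = FusionVisibilityNet(num_classes=4)
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))

Concatenating features from parallel streams, as above, is one common late-fusion choice; how it compares with classical deep fusion models is exactly what the abstract says the paper evaluates.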
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > School of Electrical Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Ko, Hanseok
College of Engineering (School of Electrical Engineering)
