Deep Gradual Multi-Exposure Fusion Via Recurrent Convolutional Network
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ryu, Je-Ho | - |
dc.contributor.author | Kim, Jong-Han | - |
dc.contributor.author | Kim, Jong-Ok | - |
dc.date.accessioned | 2022-03-12T06:41:16Z | - |
dc.date.available | 2022-03-12T06:41:16Z | - |
dc.date.created | 2022-01-20 | - |
dc.date.issued | 2021 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/138698 | - |
dc.description.abstract | The performance of multi-exposure image fusion (MEF) has recently been improved by deep learning techniques, but several problems remain. In this paper, we propose a novel MEF network based on a recurrent neural network (RNN). Multi-exposure images carry different useful information depending on their exposure levels; to fuse them complementarily, we first extract local detail and global context features from the input source images and combine the two feature types separately. A weight map is learned from the local features so that fusion reflects the importance of each source image. Adopting an RNN as the backbone enables gradual fusion: each additional input further refines the fused result, and information is propagated to deeper levels of the network. Experimental results show that, compared to conventional methods, the proposed method reduces fusion artifacts and improves detail restoration. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | FOCUS IMAGE FUSION | - |
dc.title | Deep Gradual Multi-Exposure Fusion Via Recurrent Convolutional Network | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Jong-Ok | - |
dc.identifier.doi | 10.1109/ACCESS.2021.3122540 | - |
dc.identifier.scopusid | 2-s2.0-85118593445 | - |
dc.identifier.wosid | 000712556300001 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.9, pp.144756 - 144767 | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 9 | - |
dc.citation.startPage | 144756 | - |
dc.citation.endPage | 144767 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | FOCUS IMAGE FUSION | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Image fusion | - |
dc.subject.keywordAuthor | Fuses | - |
dc.subject.keywordAuthor | Image restoration | - |
dc.subject.keywordAuthor | Image reconstruction | - |
dc.subject.keywordAuthor | Brightness | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Multi-exposure image fusion | - |
dc.subject.keywordAuthor | recurrent convolutional network | - |
dc.subject.keywordAuthor | dilated convolution filter | - |
dc.subject.keywordAuthor | gradual fusion | - |
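The abstract describes a gradual, weight-map-driven fusion in which each new exposure updates a running fused result. A minimal NumPy sketch of that recurrent fold is shown below; it is not the paper's network — the hand-crafted `well_exposedness` weight is a simple stand-in for the learned weight map, and all function names are hypothetical.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favouring mid-range intensities; a hand-crafted
    # proxy for the weight map the paper learns from local features.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def gradual_fuse(images):
    # Recurrent-style fold: the fused state is updated one exposure at
    # a time, so each additional input gradually refines the result.
    fused = images[0]
    w_acc = well_exposedness(images[0])
    for img in images[1:]:
        w = well_exposedness(img)
        w_new = w_acc + w + 1e-8          # avoid division by zero
        fused = (fused * w_acc + img * w) / w_new
        w_acc = w_new
    return fused

# Toy example: under-, mid-, and over-exposed versions of a gradient.
base = np.linspace(0.0, 1.0, 64)
exposures = [np.clip(base * g, 0.0, 1.0) for g in (0.5, 1.0, 2.0)]
fused = gradual_fuse(exposures)
```

Because the update is an incremental weighted mean, the fold can consume any number of exposures without re-processing earlier ones, mirroring the "more inputs, further improvement" property claimed for the RNN backbone.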