Detailed Information


Ghost-Free Deep High-Dynamic-Range Imaging Using Focus Pixels for Complex Motion Scenes

Authors
Woo, Sung-Min; Ryu, Je-Ho; Kim, Jong-Ok
Issue Date
2021
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Image color analysis; Image resolution; Imaging; Dynamic range; Cameras; Lenses; Photonics; Disparity; focus pixel; ghost free imaging; high dynamic range; joint learning; saturation recovery
Citation
IEEE TRANSACTIONS ON IMAGE PROCESSING, v.30, pp. 5001-5016
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume
30
Start Page
5001
End Page
5016
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/130259
DOI
10.1109/TIP.2021.3077137
ISSN
1057-7149
Abstract
Multi-exposure image fusion inevitably causes ghost artifacts owing to inaccurate image registration. In this study, we propose a deep learning technique for the seamless fusion of multi-exposed low dynamic range (LDR) images using a focus-pixel sensor. A focus-pixel sensor, originally designed for auto-focusing in mobile cameras, provides left (L) and right (R) luminance images simultaneously with a full-resolution RGB image. These L/R images are less saturated than the RGB image because they are summed to form a normal pixel value in the RGB image of the focus-pixel sensor. These two features of the focus-pixel image, namely relatively short exposure and perfect alignment, are utilized in this study to provide fusion cues for high dynamic range (HDR) imaging. To minimize fusion artifacts, luminance and chrominance fusions are performed separately in two sub-nets. In a luminance recovery network, two heterogeneous images, the focus-pixel image and the corresponding overexposed LDR image, are first fused by joint learning to produce an HDR luminance image. Subsequently, a chrominance network fuses the color components of the misaligned underexposed LDR input to obtain a three-channel HDR image. Existing deep-neural-network-based HDR fusion methods fuse misaligned multi-exposed inputs directly and therefore suffer from visual artifacts, observed mostly in saturated regions where pixel values are clipped. In contrast, the proposed method first reconstructs the missing luminance from the aligned, unsaturated focus-pixel image, and the luma-recovered image then provides the cues for accurate color fusion. The experimental results show that the proposed method not only accurately restores fine details in saturated areas but also produces ghost-free, high-quality HDR images without pre-alignment.
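
The abstract outlines a two-sub-net data flow: a luminance recovery network fuses the aligned focus-pixel L/R pair with the overexposed LDR luminance, and a chrominance network then fuses the color of the misaligned underexposed LDR input, guided by the recovered HDR luminance. The sketch below illustrates that data flow only; it is not the authors' implementation, and all module names, layer widths, and tensor shapes (fp_lr, over_luma, under_rgb) are illustrative assumptions.

# Minimal sketch of the two-sub-net fusion idea described in the abstract.
# Not the authors' network; layer sizes and names are assumptions for illustration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 convolution followed by ReLU, keeping spatial resolution
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class LuminanceRecoveryNet(nn.Module):
    """Fuses the aligned focus-pixel L/R luma pair (2 ch) with overexposed LDR luma (1 ch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, fp_lr, over_luma):
        # Output: single-channel HDR luminance estimate
        return self.net(torch.cat([fp_lr, over_luma], dim=1))

class ChrominanceNet(nn.Module):
    """Fuses the recovered HDR luma (1 ch) with the underexposed LDR color input (3 ch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(4, 32), conv_block(32, 32), nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, hdr_luma, under_rgb):
        # Output: three-channel HDR image
        return self.net(torch.cat([hdr_luma, under_rgb], dim=1))

if __name__ == "__main__":
    fp_lr = torch.rand(1, 2, 128, 128)      # aligned focus-pixel L/R luminance pair
    over_luma = torch.rand(1, 1, 128, 128)  # overexposed LDR luminance
    under_rgb = torch.rand(1, 3, 128, 128)  # misaligned underexposed LDR color input
    hdr_luma = LuminanceRecoveryNet()(fp_lr, over_luma)
    hdr_rgb = ChrominanceNet()(hdr_luma, under_rgb)
    print(hdr_rgb.shape)  # torch.Size([1, 3, 128, 128])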
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Engineering > School of Electrical Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Jong ok
College of Engineering (School of Electrical Engineering)