Single Image Deraining Using Time-Lapse Data
- Authors
- Cho, Jaehoon; Kim, Seungryong; Min, Dongbo; Sohn, Kwanghoon
- Issue Date
- 2020
- Publisher
- IEEE - Institute of Electrical and Electronics Engineers Inc.
- Keywords
- Rain; Training data; Task analysis; Convolutional neural networks; Rendering (computer graphics); Training; Feature extraction; Single image deraining; convolutional neural networks (CNNs); time-lapse dataset; dynamic fusion module
- Citation
- IEEE TRANSACTIONS ON IMAGE PROCESSING, v.29, pp.7274 - 7289
- Indexed
- SCIE; SCOPUS
- Journal Title
- IEEE TRANSACTIONS ON IMAGE PROCESSING
- Volume
- 29
- Start Page
- 7274
- End Page
- 7289
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/58923
- DOI
- 10.1109/TIP.2020.3000612
- ISSN
- 1057-7149
- Abstract
- Leveraging recent advances in deep convolutional neural networks (CNNs), single image deraining has been studied as a learning task, achieving outstanding performance over traditional hand-designed approaches. Current CNN-based deraining approaches adopt a supervised learning framework that relies on massive training data generated with synthetic rain streaks, which limits their generalization ability on real rainy images. To address this problem, we propose a novel learning framework for single image deraining that leverages time-lapse sequences instead of synthetic image pairs. The deraining networks are trained on time-lapse sequences in which both the camera and the scene are static except for time-varying rain streaks. Specifically, we formulate a background consistency loss such that the deraining networks consistently generate the same derained image from the frames of a time-lapse sequence. We additionally introduce two loss functions: a structure similarity loss that encourages the derained image to be similar to the input rainy image, and a directional gradient loss built on the assumption that the estimated rain streaks are likely to be sparse and to have dominant directions. To handle various rain conditions, we leverage a dynamic fusion module that effectively fuses multi-scale features. We also build a novel large-scale time-lapse dataset providing real-world rainy images under various rain conditions. Experiments demonstrate that the proposed method outperforms state-of-the-art techniques on synthetic and real rainy images, both qualitatively and quantitatively. On high-level vision tasks under severe rainy conditions, the proposed method can also serve as a preprocessing step for subsequent tasks.
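- For concreteness, below is a minimal PyTorch sketch of how the three losses described in the abstract might be expressed. All function names, loss weights, and the plain L1 distance used as a stand-in for the structure similarity term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def background_consistency_loss(derained_a, derained_b):
    # Two frames of a time-lapse sequence share a static background,
    # so their derained outputs should agree.
    return F.l1_loss(derained_a, derained_b)

def structure_similarity_loss(derained, rainy):
    # The paper uses a structure similarity criterion; a plain L1
    # distance to the rainy input is substituted here as a stand-in.
    return F.l1_loss(derained, rainy)

def directional_gradient_loss(streaks, w_dx=1.0, w_dy=0.1, w_sparse=1.0):
    # Assumption: rain streaks are sparse and mostly vertical, so
    # horizontal gradients are penalized more heavily than vertical
    # ones, plus an L1 sparsity term. Weights are placeholders.
    dx = (streaks[..., :, 1:] - streaks[..., :, :-1]).abs().mean()
    dy = (streaks[..., 1:, :] - streaks[..., :-1, :]).abs().mean()
    return w_dx * dx + w_dy * dy + w_sparse * streaks.abs().mean()

if __name__ == "__main__":
    # Toy stand-in for the deraining network (the paper's network also
    # includes a dynamic fusion module, omitted here).
    net = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
    frame_a = torch.rand(1, 3, 128, 128)  # two frames of one scene,
    frame_b = torch.rand(1, 3, 128, 128)  # differing only in rain
    out_a, out_b = net(frame_a), net(frame_b)
    loss = (background_consistency_loss(out_a, out_b)
            + 0.5 * structure_similarity_loss(out_a, frame_a)
            + 0.1 * directional_gradient_loss(frame_a - out_a))
    loss.backward()
```

- Note that the background consistency term is what replaces the usual rainy/clean supervised pair: no ground-truth clean image is needed, only agreement across frames of the same static scene.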
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- Graduate School > Department of Computer Science and Engineering > 1. Journal Articles