Dense Cross-Modal Correspondence Estimation With the Deep Self-Correlation Descriptor
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Seungryong | - |
dc.contributor.author | Min, Dongbo | - |
dc.contributor.author | Lin, Stephen | - |
dc.contributor.author | Sohn, Kwanghoon | - |
dc.date.accessioned | 2021-11-17T08:40:31Z | - |
dc.date.available | 2021-11-17T08:40:31Z | - |
dc.date.created | 2021-08-30 | - |
dc.date.issued | 2021-07-01 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/127733 | - |
dc.description.abstract | We present the deep self-correlation (DSC) descriptor for establishing dense correspondences between images taken under different imaging modalities, such as different spectral ranges or lighting conditions. We encode local self-similar structure in a pyramidal manner that yields both more precise localization ability and greater robustness to non-rigid image deformations. Specifically, DSC first computes multiple self-correlation surfaces with randomly sampled patches over a local support window, and then builds pyramidal self-correlation surfaces through average pooling on the surfaces. The feature responses on the self-correlation surfaces are then encoded through spatial pyramid pooling in a log-polar configuration. To better handle geometric variations such as scale and rotation, we additionally propose the geometry-invariant DSC (GI-DSC) that leverages multi-scale self-correlation computation and canonical orientation estimation. In contrast to descriptors based on deep convolutional neural networks (CNNs), DSC and GI-DSC are training-free (i.e., handcrafted descriptors), are robust to cross-modal variations, and generalize well to various modalities. Extensive experiments demonstrate the state-of-the-art performance of DSC and GI-DSC on challenging cases of cross-modal image pairs having photometric and/or geometric variations. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE COMPUTER SOC | - |
dc.subject | REGISTRATION | - |
dc.subject | IMAGES | - |
dc.title | Dense Cross-Modal Correspondence Estimation With the Deep Self-Correlation Descriptor | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Seungryong | - |
dc.identifier.doi | 10.1109/TPAMI.2020.2965528 | - |
dc.identifier.scopusid | 2-s2.0-85108022643 | - |
dc.identifier.wosid | 000659549700013 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.43, no.7, pp.2345 - 2359 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | - |
dc.citation.title | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | - |
dc.citation.volume | 43 | - |
dc.citation.number | 7 | - |
dc.citation.startPage | 2345 | - |
dc.citation.endPage | 2359 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | REGISTRATION | - |
dc.subject.keywordPlus | IMAGES | - |
dc.subject.keywordAuthor | Cross-modal correspondence | - |
dc.subject.keywordAuthor | pyramidal structure | - |
dc.subject.keywordAuthor | self-correlation | - |
dc.subject.keywordAuthor | local self-similarity | - |
dc.subject.keywordAuthor | non-rigid deformation | - |
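The descriptor pipeline outlined in the abstract (sample random patches in a local support window, compute self-correlation against the center patch, then pool the responses in a log-polar layout) can be illustrated with a toy single-scale sketch. This is not the authors' implementation: the patch size, sample count, bin layout, and function names below are illustrative assumptions, and the pyramidal average pooling and GI-DSC extensions are omitted.

```python
import numpy as np

def patch_correlation(img, center, offset, patch=3):
    """Normalized cross-correlation between the patch at `center`
    and the patch displaced by `offset` (both `patch` x `patch`)."""
    r = patch // 2
    cy, cx = center
    oy, ox = cy + offset[0], cx + offset[1]
    a = img[cy - r:cy + r + 1, cx - r:cx + r + 1].ravel().astype(float)
    b = img[oy - r:oy + r + 1, ox - r:ox + r + 1].ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 1e-8 else 0.0

def dsc_descriptor(img, center, support=9, n_samples=16,
                   n_radial=3, n_angular=8, seed=0):
    """Toy DSC-style descriptor: random offsets in a support window
    give a self-correlation surface, pooled into log-polar bins."""
    rng = np.random.default_rng(seed)
    half = support // 2
    offsets = rng.integers(-half, half + 1, size=(n_samples, 2))
    corr = np.array([patch_correlation(img, center, tuple(o)) for o in offsets])

    # Assign each sampled offset to a (log-radius, angle) bin.
    radius = np.hypot(offsets[:, 0], offsets[:, 1])
    angle = np.mod(np.arctan2(offsets[:, 0], offsets[:, 1]), 2 * np.pi)
    r_bin = np.clip((np.log1p(radius) / np.log1p(half + 1)
                     * n_radial).astype(int), 0, n_radial - 1)
    a_bin = (angle / (2 * np.pi) * n_angular).astype(int) % n_angular

    # Average-pool the correlation responses per bin.
    desc = np.zeros(n_radial * n_angular)
    count = np.zeros_like(desc)
    for c, rb, ab in zip(corr, r_bin, a_bin):
        desc[rb * n_angular + ab] += c
        count[rb * n_angular + ab] += 1
    desc = np.where(count > 0, desc / np.maximum(count, 1), 0.0)

    # L2-normalize for robust matching.
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 1e-8 else desc
```

Because normalized cross-correlation is invariant to affine intensity changes of the patches, this sketch already exhibits the kind of modality robustness the abstract claims: computing the descriptor on an image and on its intensity-inverted version yields identical vectors, since self-similarity structure is preserved even when absolute intensities are not.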
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.