Pyramidal Semantic Correspondence Networks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jeon, S. | - |
dc.contributor.author | Kim, S. | - |
dc.contributor.author | Min, D. | - |
dc.contributor.author | Sohn, K. | - |
dc.date.accessioned | 2022-03-10T04:40:38Z | - |
dc.date.available | 2022-03-10T04:40:38Z | - |
dc.date.created | 2022-02-09 | - |
dc.date.issued | 2022-12 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/138426 | - |
dc.description.abstract | This paper presents a deep architecture, called pyramidal semantic correspondence networks (PSCNet), that estimates locally-varying affine transformation fields across semantically similar images. To deal with the large appearance and shape variations that commonly exist among different instances within the same object category, we leverage a pyramidal model where the affine transformation fields are progressively estimated in a coarse-to-fine manner so that a smoothness constraint is naturally imposed. Different from previous methods, which directly estimate global or local deformations, our method first estimates the transformation over the entire image and then progressively increases the degrees of freedom of the transformation by dividing coarse cells into finer ones. To this end, we propose two spatial pyramid models that divide an image either into quad-tree rectangles or into multiple semantic elements of an object. Additionally, to overcome the limitation of insufficient training data, a novel weakly-supervised training scheme is introduced that generates progressively evolving supervisions through the spatial pyramid models by leveraging correspondence consistency across image pairs. Extensive experimental results on various benchmarks, including TSS, Proposal Flow-WILLOW, Proposal Flow-PASCAL, Caltech-101, and SPair-71k, demonstrate that the proposed method outperforms the latest methods for dense semantic correspondence. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE Computer Society | - |
dc.title | Pyramidal Semantic Correspondence Networks | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, S. | - |
dc.identifier.doi | 10.1109/TPAMI.2021.3123679 | - |
dc.identifier.scopusid | 2-s2.0-85118551202 | - |
dc.identifier.wosid | 000880661400040 | - |
dc.identifier.bibliographicCitation | IEEE Transactions on Pattern Analysis and Machine Intelligence, v.44, no.12, pp.9102 - 9118 | - |
dc.relation.isPartOf | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.citation.title | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.citation.volume | 44 | - |
dc.citation.number | 12 | - |
dc.citation.startPage | 9102 | - |
dc.citation.endPage | 9118 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordAuthor | coarse-to-fine inference | - |
dc.subject.keywordAuthor | Computer architecture | - |
dc.subject.keywordAuthor | Dense semantic correspondence | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Microprocessors | - |
dc.subject.keywordAuthor | Proposals | - |
dc.subject.keywordAuthor | Robustness | - |
dc.subject.keywordAuthor | Semantics | - |
dc.subject.keywordAuthor | spatial pyramid model | - |
dc.subject.keywordAuthor | Strain | - |
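The abstract describes a coarse-to-fine scheme in which a single whole-image affine transform is progressively refined by splitting coarse cells into quad-tree children, each inheriting and then refining its parent's transform. A minimal sketch of that control flow is shown below; the function and parameter names (`subdivide`, `coarse_to_fine_affine`, `estimate_residual`) are illustrative assumptions, not the paper's actual API, and the residual estimator stands in for the network's learned per-cell regression.

```python
import numpy as np

def subdivide(cell):
    """Split a (x, y, w, h) cell into four equal quad-tree children."""
    x, y, w, h = cell
    hw, hh = w / 2.0, h / 2.0
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

def coarse_to_fine_affine(levels, estimate_residual):
    """Sketch of coarse-to-fine per-cell affine estimation.

    Starts from one cell covering the unit image with an identity
    transform; at each level every cell is subdivided, and each child
    refines its parent's 3x3 affine matrix with a residual transform
    returned by `estimate_residual(cell, parent_affine)` (a placeholder
    for the learned estimator). Inheriting the parent transform is what
    imposes the smoothness constraint across neighbouring cells.
    """
    cells = [(0.0, 0.0, 1.0, 1.0)]   # level 0: the entire image
    affines = [np.eye(3)]            # start from the identity transform
    for _ in range(levels):
        new_cells, new_affines = [], []
        for cell, parent_A in zip(cells, affines):
            for child in subdivide(cell):
                residual = estimate_residual(child, parent_A)
                new_cells.append(child)
                new_affines.append(residual @ parent_A)  # refine parent
        cells, affines = new_cells, new_affines
    return cells, affines
```

With an identity residual estimator, two levels of subdivision yield a 4x4 grid of 16 cells, each still carrying the identity transform; a learned estimator would instead return small per-cell corrections.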
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.