Joint patch clustering-based dictionary learning for multimodal image fusion
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Minjae | - |
dc.contributor.author | Han, David K. | - |
dc.contributor.author | Ko, Hanseok | - |
dc.date.accessioned | 2021-09-04T04:18:30Z | - |
dc.date.available | 2021-09-04T04:18:30Z | - |
dc.date.created | 2021-06-18 | - |
dc.date.issued | 2016-01 | - |
dc.identifier.issn | 1566-2535 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/89880 | - |
dc.description.abstract | Constructing a good dictionary is the key to a successful image fusion technique in sparsity-based models. An efficient dictionary learning method based on joint patch clustering is proposed for multimodal image fusion. To construct an over-complete dictionary that ensures a sufficient number of useful atoms for representing a fused image, which conveys image information from different sensor modalities, all patches from the different source images are clustered together according to their structural similarities. To construct a compact but informative dictionary, only a few principal components that effectively describe each joint patch cluster are selected and combined to form the over-complete dictionary. Finally, sparse coefficients are estimated by a simultaneous orthogonal matching pursuit algorithm to represent the multimodal images with the common dictionary learned by the proposed method. Experimental results with various pairs of source images validate the effectiveness of the proposed method for the image fusion task. (C) 2015 Elsevier B.V. All rights reserved. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER SCIENCE BV | - |
dc.subject | OF-THE-ART | - |
dc.subject | SPARSE REPRESENTATION | - |
dc.subject | PERFORMANCE | - |
dc.subject | TRANSFORM | - |
dc.subject | INFORMATION | - |
dc.subject | APPROXIMATION | - |
dc.subject | PURSUIT | - |
dc.title | Joint patch clustering-based dictionary learning for multimodal image fusion | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Ko, Hanseok | - |
dc.identifier.doi | 10.1016/j.inffus.2015.03.003 | - |
dc.identifier.scopusid | 2-s2.0-84938200080 | - |
dc.identifier.wosid | 000362145000017 | - |
dc.identifier.bibliographicCitation | INFORMATION FUSION, v.27, pp.198 - 214 | - |
dc.relation.isPartOf | INFORMATION FUSION | - |
dc.citation.title | INFORMATION FUSION | - |
dc.citation.volume | 27 | - |
dc.citation.startPage | 198 | - |
dc.citation.endPage | 214 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordPlus | OF-THE-ART | - |
dc.subject.keywordPlus | SPARSE REPRESENTATION | - |
dc.subject.keywordPlus | PERFORMANCE | - |
dc.subject.keywordPlus | TRANSFORM | - |
dc.subject.keywordPlus | INFORMATION | - |
dc.subject.keywordPlus | APPROXIMATION | - |
dc.subject.keywordPlus | PURSUIT | - |
dc.subject.keywordAuthor | Multimodal image fusion | - |
dc.subject.keywordAuthor | Sparse representation | - |
dc.subject.keywordAuthor | Dictionary learning | - |
dc.subject.keywordAuthor | Clustering | - |
dc.subject.keywordAuthor | K-SVD | - |
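The pipeline described in the abstract — pool patches from all source images, cluster them jointly, keep a few principal components per cluster as dictionary atoms, then code patches with simultaneous OMP — can be sketched roughly as below. This is an illustrative reconstruction, not the authors' code: plain k-means stands in for the paper's structural-similarity-based joint clustering, and the SOMP here is a minimal textbook variant; all function names and parameter values are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, p=8, stride=4):
    """Slide a p x p window over img and return flattened patches as rows."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, stride)
                     for j in range(0, W - p + 1, stride)])

def kmeans(X, k, iters=20):
    """Plain k-means; a stand-in for the paper's structural-similarity clustering."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels

def learn_dictionary(sources, k=12, n_pc=8, p=8):
    """Joint patch clustering -> per-cluster PCA -> concatenated dictionary."""
    X = np.vstack([extract_patches(s, p) for s in sources])  # joint patch pool
    X = X - X.mean(1, keepdims=True)                         # remove per-patch DC
    labels = kmeans(X, k)
    atoms = []
    for c in range(k):
        Xc = X[labels == c]
        if len(Xc) == 0:
            continue
        # top right singular vectors = principal components of the cluster
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        atoms.append(Vt[:n_pc])
    D = np.vstack(atoms).T                    # columns are dictionary atoms
    return D / np.linalg.norm(D, axis=0)

def somp(D, Y, n_nonzero=4):
    """Simultaneous OMP: one shared support for all signal columns of Y."""
    R = Y.copy()
    support = []
    for _ in range(n_nonzero):
        # atom with the largest total correlation across all signals
        j = int(np.argmax(np.abs(D.T @ R).sum(axis=1)))
        support.append(j)
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        R = Y - Ds @ coef
    return support, coef

# toy demo: two "modalities" of the same scene
a = rng.random((32, 32))
b = a + 0.1 * rng.random((32, 32))
D = learn_dictionary([a, b])
Y = extract_patches(a)[:5].T                 # 5 patches as columns
Y = Y - Y.mean(0, keepdims=True)
support, coef = somp(D, Y)
print(D.shape, len(support), coef.shape)
```

In the actual fusion step the paper codes corresponding patches from all modalities against this single common dictionary, so the shared SOMP support is what lets the coefficients be compared and merged across sensors.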