Dominant orientation patch matching for HMAX
DC Field | Value | Language |
--- | --- | --- |
dc.contributor.author | Lu, Yan-Feng | - |
dc.contributor.author | Zhang, Hua-Zhen | - |
dc.contributor.author | Kang, Tae-Koo | - |
dc.contributor.author | Lim, Myo-Taeg | - |
dc.date.accessioned | 2021-09-03T22:55:32Z | - |
dc.date.available | 2021-09-03T22:55:32Z | - |
dc.date.created | 2021-06-18 | - |
dc.date.issued | 2016-06-12 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/88344 | - |
dc.description.abstract | The biologically inspired model for object recognition, Hierarchical Model and X (HMAX), has attracted considerable attention in recent years. HMAX is robust (i.e., shift- and scale-invariant), but it is sensitive to rotational deformation, which greatly limits its performance in object recognition. The main reason for this is that HMAX lacks an appropriate directional module against rotational deformation, thereby often leading to mismatches. To address this issue, we propose a novel patch-matching method for HMAX called Dominant Orientation Patch Matching (DOPM), which calculates the dominant orientation of the selected patches and implements patch-to-patch matching. In contrast to patch matching with the whole target image (second layer C1) in the conventional HMAX model, which involves huge amounts of redundant information in the feature representation, the DOPM-based HMAX model (D-HMAX) quantizes the C1 layer to patch sets with better distinctiveness, then realizes patch-to-patch matching based on the dominant orientation. To show the effectiveness of D-HMAX, we apply it to object categorization and conduct experiments on the CalTech101, CalTech05, GRAZ01, and GRAZ02 databases. Our experimental results demonstrate that D-HMAX outperforms conventional HMAX and is comparable to existing architectures that have a similar framework. © 2016 Elsevier B.V. All rights reserved. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER | - |
dc.subject | OBJECT RECOGNITION | - |
dc.subject | RECEPTIVE-FIELDS | - |
dc.subject | AREA V4 | - |
dc.subject | FEATURES | - |
dc.subject | MODEL | - |
dc.subject | APPEARANCE | - |
dc.subject | NEURONS | - |
dc.subject | MACAQUE | - |
dc.title | Dominant orientation patch matching for HMAX | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lim, Myo-Taeg | - |
dc.identifier.doi | 10.1016/j.neucom.2016.01.069 | - |
dc.identifier.scopusid | 2-s2.0-84959887741 | - |
dc.identifier.wosid | 000375506500017 | - |
dc.identifier.bibliographicCitation | NEUROCOMPUTING, v.193, pp.155 - 166 | - |
dc.relation.isPartOf | NEUROCOMPUTING | - |
dc.citation.title | NEUROCOMPUTING | - |
dc.citation.volume | 193 | - |
dc.citation.startPage | 155 | - |
dc.citation.endPage | 166 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordPlus | OBJECT RECOGNITION | - |
dc.subject.keywordPlus | RECEPTIVE-FIELDS | - |
dc.subject.keywordPlus | AREA V4 | - |
dc.subject.keywordPlus | FEATURES | - |
dc.subject.keywordPlus | MODEL | - |
dc.subject.keywordPlus | APPEARANCE | - |
dc.subject.keywordPlus | NEURONS | - |
dc.subject.keywordPlus | MACAQUE | - |
dc.subject.keywordAuthor | Object recognition | - |
dc.subject.keywordAuthor | Classification | - |
dc.subject.keywordAuthor | HMAX | - |
dc.subject.keywordAuthor | Dominant orientation | - |
dc.subject.keywordAuthor | Patch | - |
dc.subject.keywordAuthor | Matching | - |
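The abstract describes DOPM only in prose; as an illustration, here is a minimal NumPy sketch of the idea as we read it: estimate a C1 patch's dominant orientation as the orientation band carrying the most response energy, then restrict S2-style patch matching to candidate patches whose dominant orientation agrees with the prototype's. The `(n_orient, h, w)` patch layout, the energy-based orientation estimate, and the Gaussian bandwidth below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dominant_orientation(patch):
    """Index of the orientation band with the largest response energy.

    `patch` is a C1 patch of shape (n_orient, h, w): one band of
    Gabor-derived responses per orientation (typically 4 in HMAX).
    """
    energies = np.sum(patch ** 2, axis=(1, 2))
    return int(np.argmax(energies))

def iter_patches(c1_map, size, stride=1):
    """Slide a window over a C1 map of shape (n_orient, H, W)."""
    _, h, w = c1_map.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield c1_map[:, y:y + size, x:x + size]

def dopm_response(prototype, c1_map, stride=1):
    """Best match between one prototype patch and a target C1 map.

    Unlike conventional HMAX, which compares the prototype against
    every C1 location, this scores only candidates whose dominant
    orientation matches the prototype's (patch-to-patch matching).
    """
    proto_orient = dominant_orientation(prototype)
    size = prototype.shape[1]
    best = 0.0
    for patch in iter_patches(c1_map, size, stride):
        if dominant_orientation(patch) != proto_orient:
            continue  # prune candidates with a different dominant orientation
        # Gaussian radial-basis response, as in the standard HMAX S2 stage;
        # the bandwidth (2 * prototype.size) is an arbitrary placeholder.
        dist = np.sum((patch - prototype) ** 2)
        best = max(best, float(np.exp(-dist / (2.0 * prototype.size))))
    return best
```

In this reading, gating by dominant orientation prunes comparisons against locations with incompatible orientation structure, which speaks to both problems the abstract raises: the mismatches caused by rotational deformation and the redundancy of matching prototypes against the whole C1 map.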