B-HMAX: A fast binary biologically inspired model for object recognition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Hua-Zhen | - |
dc.contributor.author | Lu, Yan-Feng | - |
dc.contributor.author | Kang, Tae-Koo | - |
dc.contributor.author | Lim, Myo-Taeg | - |
dc.date.accessioned | 2021-09-03T15:45:13Z | - |
dc.date.available | 2021-09-03T15:45:13Z | - |
dc.date.created | 2021-06-16 | - |
dc.date.issued | 2016-12-19 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/86522 | - |
dc.description.abstract | The biologically inspired model, Hierarchical Model and X (HMAX), has excellent performance in object categorization. It consists of four layers of computational units based on the mechanisms of the visual cortex. However, the random patch selection method in HMAX often leads to mismatches due to the extraction of redundant information, and recognition is computationally expensive because of the Euclidean distance calculations used to measure similarity in the third layer, S2. To address these limitations, we propose a fast binary-based HMAX model (B-HMAX). In the proposed method, we detect corner-based interest points after the second layer, C1, to extract fewer, more distinctive features; describe the image patches extracted around the detected corners with binary strings; and then match patches in the third layer, S2, using the Hamming distance, which is much faster to compute than the Euclidean distance. The experimental results demonstrate that our proposed B-HMAX model can reduce the total processing time per image by almost 80%, while keeping accuracy competitive with the standard HMAX. (C) 2016 Elsevier B.V. All rights reserved. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER | - |
dc.subject | LOCAL FEATURES | - |
dc.subject | CLASSIFICATION | - |
dc.subject | TEXTURE | - |
dc.title | B-HMAX: A fast binary biologically inspired model for object recognition | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lim, Myo-Taeg | - |
dc.identifier.doi | 10.1016/j.neucom.2016.08.051 | - |
dc.identifier.scopusid | 2-s2.0-84994130278 | - |
dc.identifier.wosid | 000388053700026 | - |
dc.identifier.bibliographicCitation | NEUROCOMPUTING, v.218, pp.242 - 250 | - |
dc.relation.isPartOf | NEUROCOMPUTING | - |
dc.citation.title | NEUROCOMPUTING | - |
dc.citation.volume | 218 | - |
dc.citation.startPage | 242 | - |
dc.citation.endPage | 250 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordPlus | LOCAL FEATURES | - |
dc.subject.keywordPlus | CLASSIFICATION | - |
dc.subject.keywordPlus | TEXTURE | - |
dc.subject.keywordAuthor | Object recognition | - |
dc.subject.keywordAuthor | Classification | - |
dc.subject.keywordAuthor | HMAX | - |
dc.subject.keywordAuthor | Binary descriptor | - |
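
The speed claim in the abstract rests on swapping the S2 similarity measure: Euclidean distance between real-valued patches is replaced by Hamming distance between binary descriptors, which reduces to a popcount over a bitwise XOR. The sketch below illustrates that contrast only; it is not the authors' implementation. The descriptor length (256 bits), patch size (16x16), and prototype count are illustrative assumptions, and the corner detection and binary description steps the paper performs after C1 are omitted.

```python
# Minimal sketch (assumed parameters, not the B-HMAX code) contrasting the
# S2 patch-matching cost in standard HMAX (Euclidean distance on real-valued
# C1 patches) with the B-HMAX replacement (Hamming distance on binary strings).
import numpy as np

def euclidean_match(patch: np.ndarray, prototypes: np.ndarray) -> int:
    """Standard HMAX-style S2 step: nearest prototype by Euclidean distance."""
    dists = np.linalg.norm(prototypes - patch, axis=1)
    return int(np.argmin(dists))

def hamming_match(desc: np.ndarray, prototypes: np.ndarray) -> int:
    """B-HMAX-style S2 step: nearest prototype by Hamming distance.
    Descriptors are bit strings packed into uint8, so the distance is a
    popcount over the XOR of the two bit strings."""
    xor = np.bitwise_xor(prototypes, desc)          # differing bits
    dists = np.unpackbits(xor, axis=1).sum(axis=1)  # popcount per prototype
    return int(np.argmin(dists))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 500 real-valued 16x16 prototype patches vs. one query patch.
    protos_f = rng.standard_normal((500, 256)).astype(np.float32)
    query_f = rng.standard_normal(256).astype(np.float32)
    print("Euclidean winner:", euclidean_match(query_f, protos_f))
    # 500 packed 256-bit binary descriptors vs. one query descriptor.
    protos_b = rng.integers(0, 256, (500, 32), dtype=np.uint8)
    query_b = rng.integers(0, 256, 32, dtype=np.uint8)
    print("Hamming winner:", hamming_match(query_b, protos_b))
```

The asymmetry in cost is the design point: the Euclidean path needs floating-point subtraction, squaring, and summation per dimension, while the Hamming path operates on packed words with XOR and popcount, which is what makes the S2 matching stage in B-HMAX so much cheaper.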