Autonomous Salient Feature Detection through Salient Cues in an HSV Color Space for Visual Indoor Simultaneous Localization and Mapping
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Yong-Ju | - |
dc.contributor.author | Song, Jae-Bok | - |
dc.date.accessioned | 2021-09-08T09:59:40Z | - |
dc.date.available | 2021-09-08T09:59:40Z | - |
dc.date.created | 2021-06-11 | - |
dc.date.issued | 2010 | - |
dc.identifier.issn | 0169-1864 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/118550 | - |
dc.description.abstract | For successful simultaneous localization and mapping (SLAM), perception of the environment is important. This paper proposes a scheme to autonomously detect visual features that can be used as natural landmarks for indoor SLAM. First, features are roughly selected from the camera image through entropy maps that measure the level of randomness of pixel information. Then, the saliency of each pixel is computed by measuring the level of similarity between the selected features and the given image. In the saliency map, it is possible to distinguish the salient features from the background. The robot estimates its pose by using the detected features and builds a grid map of the unknown environment by using a range sensor. The feature positions are stored in the grid map. Experimental results show that the feature detection method proposed in this paper can autonomously detect features in unknown environments reasonably well. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2010 | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | TAYLOR & FRANCIS LTD | - |
dc.subject | ATTENTION | - |
dc.subject | SCALE | - |
dc.subject | SLAM | - |
dc.title | Autonomous Salient Feature Detection through Salient Cues in an HSV Color Space for Visual Indoor Simultaneous Localization and Mapping | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Song, Jae-Bok | - |
dc.identifier.doi | 10.1163/016918610X512613 | - |
dc.identifier.scopusid | 2-s2.0-77957323059 | - |
dc.identifier.wosid | 000281602500004 | - |
dc.identifier.bibliographicCitation | ADVANCED ROBOTICS, v.24, no.11, pp.1595 - 1613 | - |
dc.relation.isPartOf | ADVANCED ROBOTICS | - |
dc.citation.title | ADVANCED ROBOTICS | - |
dc.citation.volume | 24 | - |
dc.citation.number | 11 | - |
dc.citation.startPage | 1595 | - |
dc.citation.endPage | 1613 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.subject.keywordPlus | ATTENTION | - |
dc.subject.keywordPlus | SCALE | - |
dc.subject.keywordPlus | SLAM | - |
dc.subject.keywordAuthor | Mobile robot | - |
dc.subject.keywordAuthor | salient features | - |
dc.subject.keywordAuthor | SIFT | - |
dc.subject.keywordAuthor | SLAM | - |
dc.subject.keywordAuthor | visual attention | - |
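The abstract's first step, roughly selecting features "through entropy maps that measure the level of randomness of pixel information", can be illustrated with a local Shannon-entropy computation. This is a minimal sketch, not the authors' implementation: the window size and bin count are assumptions, and the paper operates on HSV color channels while the sketch uses a single 8-bit channel for brevity.

```python
import numpy as np

def local_entropy_map(channel: np.ndarray, win: int = 9, bins: int = 16) -> np.ndarray:
    """Shannon entropy of pixel values in a win x win neighborhood.

    High local entropy marks regions whose pixel values are locally
    'random'; such regions serve as rough feature candidates.
    Window size and bin count are illustrative choices.
    """
    h, w = channel.shape
    pad = win // 2
    padded = np.pad(channel, pad, mode="edge")
    # Quantize 8-bit values into a small number of bins before histogramming.
    quant = np.clip((padded.astype(np.float64) * bins / 256).astype(int), 0, bins - 1)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = quant[y:y + win, x:x + win].ravel()
            p = np.bincount(patch, minlength=bins) / patch.size
            p = p[p > 0]
            out[y, x] = -np.sum(p * np.log2(p))  # Shannon entropy in bits
    return out

# Toy example: a flat background with one textured (random) square.
rng = np.random.default_rng(0)
img = np.full((40, 40), 128, dtype=np.uint8)
img[15:25, 15:25] = rng.integers(0, 256, (10, 10), dtype=np.uint8)
emap = local_entropy_map(img)
# The textured square scores higher than the flat background,
# so thresholding the entropy map yields candidate feature regions.
print(emap[20, 20] > emap[5, 5])  # True
```

In the paper's pipeline, these entropy-selected candidates are then scored by a similarity-based saliency measure to separate landmarks from background before they are used for pose estimation.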
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.