Gesture spotting and recognition for human-robot interaction
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Hee-Deok | - |
dc.contributor.author | Park, A-Yeon | - |
dc.contributor.author | Lee, Seong-Whan | - |
dc.date.accessioned | 2021-09-09T17:23:09Z | - |
dc.date.available | 2021-09-09T17:23:09Z | - |
dc.date.created | 2021-06-10 | - |
dc.date.issued | 2007-04 | - |
dc.identifier.issn | 1552-3098 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/125793 | - |
dc.description.abstract | Visual interpretation of gestures can be useful in accomplishing natural human-robot interaction (HRI). Previous HRI research focused on issues such as hand gestures, sign language, and command gesture recognition. Automatic recognition of whole-body gestures is required in order for HRI to operate naturally. This presents a challenging problem, because describing and modeling meaningful gesture patterns from whole-body gestures is a complex task. This paper presents a new method for recognition of whole-body key gestures in HRI. A human subject is first described by a set of features encoding the angular relationship between a dozen body parts in 3-D. Each feature vector is then mapped to a codeword of hidden Markov models. In order to spot key gestures accurately, a sophisticated method of designing a transition gesture model is proposed. To reduce the number of states in the transition gesture model, a model-reduction technique that merges similar states based on data-dependent statistics and relative entropy is used. The experimental results demonstrate that the proposed method is efficient and effective in HRI for automatic recognition of whole-body key gestures from motion sequences. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Gesture spotting and recognition for human-robot interaction | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lee, Seong-Whan | - |
dc.identifier.doi | 10.1109/TRO.2006.889491 | - |
dc.identifier.scopusid | 2-s2.0-34247223015 | - |
dc.identifier.wosid | 000245904500007 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON ROBOTICS, v.23, no.2, pp.256 - 270 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON ROBOTICS | - |
dc.citation.title | IEEE TRANSACTIONS ON ROBOTICS | - |
dc.citation.volume | 23 | - |
dc.citation.number | 2 | - |
dc.citation.startPage | 256 | - |
dc.citation.endPage | 270 | - |
dc.type.rims | ART | - |
dc.type.docType | Article; Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.subject.keywordAuthor | gesture spotting | - |
dc.subject.keywordAuthor | hidden Markov model (HMM) | - |
dc.subject.keywordAuthor | human-robot interaction (HRI) | - |
dc.subject.keywordAuthor | mobile robot | - |
dc.subject.keywordAuthor | transition gesture model | - |
dc.subject.keywordAuthor | whole-body gesture recognition | - |
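The abstract describes spotting key gestures by scoring codeword sequences against gesture HMMs and a transition (non-gesture) model. The sketch below is purely illustrative and not the paper's implementation: it implements the standard forward algorithm for a discrete HMM and spots a gesture when a hypothetical key-gesture model outscores a uniform transition model; all states, codebook sizes, and probabilities are made up for the example.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a codeword sequence under a discrete HMM.

    pi: initial state probabilities, A: state-transition matrix,
    B: per-state emission probabilities over the codebook.
    """
    # Initialize the forward variable for the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    # Standard forward recursion over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return math.log(sum(alpha))

# Toy 2-state key-gesture model over a 3-codeword codebook (hypothetical).
key_model = dict(pi=[0.8, 0.2],
                 A=[[0.7, 0.3], [0.2, 0.8]],
                 B=[[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])

# Toy transition (non-gesture) model: uninformative, uniform emissions.
transition_model = dict(pi=[0.5, 0.5],
                        A=[[0.5, 0.5], [0.5, 0.5]],
                        B=[[1/3, 1/3, 1/3], [1/3, 1/3, 1/3]])

def spot(obs):
    """Spot a key gesture when its model outscores the transition model."""
    return (forward_log_likelihood(obs, **key_model)
            > forward_log_likelihood(obs, **transition_model))

print(spot([0, 0, 1, 2, 2]))  # → True for this toy sequence
```

In the paper the transition model is designed more carefully and its states are merged via relative entropy; here the uniform filler model simply stands in for that role as a likelihood-ratio threshold.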
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.