View-independent human action recognition with Volume Motion Template on single stereo camera
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Roh, Myung-Cheol | - |
dc.contributor.author | Shin, Ho-Keun | - |
dc.contributor.author | Lee, Seong-Whan | - |
dc.date.accessioned | 2021-09-08T03:11:28Z | - |
dc.date.available | 2021-09-08T03:11:28Z | - |
dc.date.created | 2021-06-11 | - |
dc.date.issued | 2010-05-01 | - |
dc.identifier.issn | 0167-8655 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/116468 | - |
dc.description.abstract | Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been actively pursued. In real 3D environments, however, the viewpoint is dynamic: a person may appear at any position and face any direction, so viewpoint dependency must be addressed. To overcome this dependency, we propose the Volume Motion Template (VMT) and the Projected Motion Template (PMT). The VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto the 2D plane orthogonal to an optimal virtual viewpoint, defined as the viewpoint from which the action can be described in greatest detail in 2D. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. Experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition. (C) 2009 Elsevier B.V. All rights reserved. (An illustrative sketch of the VMT update appears after this table.) | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER | - |
dc.subject | TIME GESTURE RECOGNITION | - |
dc.subject | SEGMENTATION | - |
dc.title | View-independent human action recognition with Volume Motion Template on single stereo camera | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lee, Seong-Whan | - |
dc.identifier.doi | 10.1016/j.patrec.2009.11.017 | - |
dc.identifier.scopusid | 2-s2.0-77649337452 | - |
dc.identifier.wosid | 000276700500013 | - |
dc.identifier.bibliographicCitation | PATTERN RECOGNITION LETTERS, v.31, no.7, pp.639 - 647 | - |
dc.relation.isPartOf | PATTERN RECOGNITION LETTERS | - |
dc.citation.title | PATTERN RECOGNITION LETTERS | - |
dc.citation.volume | 31 | - |
dc.citation.number | 7 | - |
dc.citation.startPage | 639 | - |
dc.citation.endPage | 647 | - |
dc.type.rims | ART | - |
dc.type.docType | Article; Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.subject.keywordPlus | TIME GESTURE RECOGNITION | - |
dc.subject.keywordPlus | SEGMENTATION | - |
dc.subject.keywordAuthor | View-independence | - |
dc.subject.keywordAuthor | Human action recognition | - |
dc.subject.keywordAuthor | Volume Motion Template | - |
dc.subject.keywordAuthor | Motion History Image | - |
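The abstract above describes the VMT as a voxel-space extension of the Motion History Image and the PMT as its projection onto a plane orthogonal to an optimal virtual viewpoint. The sketch below is a minimal illustration of that idea, not the paper's implementation: the names `update_vmt` and `project_vmt`, the parameters `tau` and `delta`, and the use of an orthographic max-projection along a fixed axis (standing in for the paper's optimal-viewpoint search) are all assumptions made for illustration.

```python
import numpy as np

def update_vmt(vmt, occupancy, tau=255, delta=1):
    """One Volume Motion Template update step (illustrative).

    Extends the classic MHI recurrence
        H(p, t) = tau                       if motion occurred at p
                = max(0, H(p, t-1) - delta) otherwise
    from 2D pixels to a 3D voxel grid.

    vmt       : float ndarray (X, Y, Z), previous template
    occupancy : bool ndarray  (X, Y, Z), voxels where motion was
                detected in the current frame (in the paper this would
                be derived from a single stereo camera's depth data)
    """
    decayed = np.maximum(vmt - delta, 0)
    return np.where(occupancy, float(tau), decayed)

def project_vmt(vmt, axis=0):
    """Project the VMT onto the 2D plane orthogonal to `axis`.

    A simple orthographic max-projection stands in for the paper's
    Projected Motion Template; the paper instead projects along an
    optimal virtual viewpoint found by searching over rotations of
    the volume, which is omitted here.
    """
    return vmt.max(axis=axis)

# Toy usage: a 32^3 volume with a blob sweeping along the x axis.
vmt = np.zeros((32, 32, 32))
for t in range(10):
    occ = np.zeros((32, 32, 32), dtype=bool)
    occ[10 + t, 12:16, 12:16] = True       # motion at frame t
    vmt = update_vmt(vmt, occ, tau=255, delta=25)
pmt = project_vmt(vmt, axis=2)             # view along the z axis
print(pmt.shape)                            # (32, 32)
```

The max-projection collapses the motion-history volume into a 2D template whose intensity gradient encodes recency of motion, which is what makes MHI-style templates usable as action descriptors; choosing the projection axis per action, as the paper does with the optimal virtual viewpoint, is what yields view independence.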