Pose-Guided Graph Convolutional Networks for Skeleton-Based Action Recognition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Han | - |
dc.contributor.author | Jiang, Yifan | - |
dc.contributor.author | Ko, Hanseok | - |
dc.date.accessioned | 2022-11-20T09:40:47Z | - |
dc.date.available | 2022-11-20T09:40:47Z | - |
dc.date.created | 2022-11-17 | - |
dc.date.issued | 2022 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/146099 | - |
dc.description.abstract | Graph convolutional networks (GCNs), which model human body skeletons as spatial and temporal graphs, have shown remarkable potential in skeleton-based action recognition. However, in existing GCN-based methods, the graph-structured representation of the human skeleton makes it difficult to fuse with other modalities, especially in the early stages, which may limit scalability and performance in action recognition tasks. In addition, pose information, which naturally contains informative and discriminative clues for action recognition, is rarely explored together with skeleton data in existing methods. In this work, we propose the pose-guided GCN (PG-GCN), a multi-modal framework for high-performance human action recognition. In particular, a multi-stream network is constructed to simultaneously explore robust features from both the pose and skeleton data, while a dynamic attention module is designed for early-stage feature fusion. The core idea of this module is to use a trainable graph to aggregate features from the skeleton stream with those of the pose stream, which leads to a network with more robust feature representation ability (an illustrative sketch of this fusion idea follows the record below). Extensive experiments show that the proposed PG-GCN achieves state-of-the-art performance on the NTU RGB+D 60 and NTU RGB+D 120 datasets. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. (IEEE) | - |
dc.title | Pose-Guided Graph Convolutional Networks for Skeleton-Based Action Recognition | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Ko, Hanseok | - |
dc.identifier.doi | 10.1109/ACCESS.2022.3214812 | - |
dc.identifier.scopusid | 2-s2.0-85140792482 | - |
dc.identifier.wosid | 000875652300001 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.10, pp.111725 - 111731 | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 10 | - |
dc.citation.startPage | 111725 | - |
dc.citation.endPage | 111731 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordAuthor | Convolutional neural networks | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Three-dimensional displays | - |
dc.subject.keywordAuthor | Correlation | - |
dc.subject.keywordAuthor | Pose estimation | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Data models | - |
dc.subject.keywordAuthor | Skeletons | - |
dc.subject.keywordAuthor | Action recognition | - |
dc.subject.keywordAuthor | attention mechanism | - |
dc.subject.keywordAuthor | feature fusion | - |
dc.subject.keywordAuthor | graph convolutional networks | - |
dc.subject.keywordAuthor | human skeleton | - |
dc.subject.keywordAuthor | pose information | - |
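The abstract describes an early-stage fusion module built around a trainable graph that aggregates pose-stream features into the skeleton stream. Below is a minimal PyTorch sketch of that idea, assuming NTU-style 25-joint skeletons, a row-normalized learnable adjacency as the "dynamic attention", and channel-wise concatenation followed by a 1x1 projection for the fusion step. The class name, tensor shapes, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of trainable-graph fusion in the spirit of PG-GCN's
# dynamic attention module; not the paper's actual code.
import torch
import torch.nn as nn


class DynamicAttentionFusion(nn.Module):
    """Fuse skeleton- and pose-stream features through a learnable joint graph."""

    def __init__(self, channels: int, num_joints: int = 25):
        super().__init__()
        # Trainable graph over joint pairs, shared across batch and time
        # (an assumption for illustration): initialized near identity.
        self.graph = nn.Parameter(
            torch.eye(num_joints) + 1e-3 * torch.randn(num_joints, num_joints)
        )
        # 1x1 convolution projects the concatenated streams back to C channels.
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, skel: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # skel, pose: (N, C, T, V) -- batch, channels, frames, joints
        attn = torch.softmax(self.graph, dim=-1)       # row-normalized graph
        # Aggregate pose features across joints through the learned graph.
        pose_agg = torch.einsum("nctv,vw->nctw", pose, attn)
        fused = torch.cat([skel, pose_agg], dim=1)     # early-stage fusion
        return self.proj(fused)                        # back to C channels


if __name__ == "__main__":
    n, c, t, v = 2, 64, 32, 25                         # NTU-style 25 joints
    module = DynamicAttentionFusion(channels=c, num_joints=v)
    out = module(torch.randn(n, c, t, v), torch.randn(n, c, t, v))
    print(out.shape)                                   # torch.Size([2, 64, 32, 25])
```

Because the graph is a free parameter rather than a fixed skeletal adjacency, gradient descent can learn which joint pairs are most informative for combining the two modalities, which is one plausible reading of "dynamic attention" in the abstract.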