Discriminative context learning with gated recurrent unit for group activity recognition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Pil-Soo | - |
dc.contributor.author | Lee, Dong-Gyu | - |
dc.contributor.author | Lee, Seong-Whan | - |
dc.date.accessioned | 2021-09-02T12:47:16Z | - |
dc.date.available | 2021-09-02T12:47:16Z | - |
dc.date.created | 2021-06-16 | - |
dc.date.issued | 2018-04 | - |
dc.identifier.issn | 0031-3203 | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/76187 | - |
dc.description.abstract | In this study, we address the problem of similar local motions that create confusion between different group activities. To reduce the influence of such motions, we propose a discriminative group context feature (DGCF) that considers prominent sub-events. Moreover, we adopt a gated recurrent unit (GRU) model that can learn temporal changes in a sequence. In real-world scenarios, people perform activities over different temporal lengths. The GRU model handles training data of arbitrary length with non-linear hidden units in the network. However, when we use a deep neural network model, data scarcity causes overfitting problems. Data augmentation methods for images are ineffective for trajectory data. Thus, we also propose a method for trajectory augmentation. We evaluate the effectiveness of the proposed method on three datasets. In our experiments on each dataset, we show that the proposed method outperforms the competing state-of-the-art methods for group activity recognition. © 2017 Elsevier Ltd. All rights reserved. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER SCI LTD | - |
dc.subject | MODEL | - |
dc.title | Discriminative context learning with gated recurrent unit for group activity recognition | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lee, Seong-Whan | - |
dc.identifier.doi | 10.1016/j.patcog.2017.10.037 | - |
dc.identifier.scopusid | 2-s2.0-85040311409 | - |
dc.identifier.wosid | 000424853800012 | - |
dc.identifier.bibliographicCitation | PATTERN RECOGNITION, v.76, pp.149 - 161 | - |
dc.relation.isPartOf | PATTERN RECOGNITION | - |
dc.citation.title | PATTERN RECOGNITION | - |
dc.citation.volume | 76 | - |
dc.citation.startPage | 149 | - |
dc.citation.endPage | 161 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | MODEL | - |
dc.subject.keywordAuthor | Group activity recognition | - |
dc.subject.keywordAuthor | Sequence modeling | - |
dc.subject.keywordAuthor | Recurrent neural network | - |
dc.subject.keywordAuthor | Gated recurrent unit | - |
dc.subject.keywordAuthor | Data augmentation | - |
dc.subject.keywordAuthor | Video surveillance | - |
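The abstract notes that image-style data augmentation does not transfer to trajectory data, motivating a trajectory-specific augmentation method. As a minimal illustrative sketch (not the paper's actual scheme), one simple way to enlarge a small trajectory dataset is to perturb each 2-D trajectory with a global random shift plus per-point Gaussian jitter; the function name and parameters below are hypothetical:

```python
import random

def augment_trajectory(traj, noise_std=0.02, max_shift=0.1, seed=None):
    """Return a perturbed copy of a 2-D trajectory (list of (x, y) points).

    Illustrative only: applies a global random translation plus per-point
    Gaussian jitter, two simple perturbations one might use to enlarge a
    small trajectory dataset. The DGCF paper's augmentation method may
    differ; this is a hedged sketch of the general idea.
    """
    rng = random.Random(seed)
    dx = rng.uniform(-max_shift, max_shift)  # global shift, x component
    dy = rng.uniform(-max_shift, max_shift)  # global shift, y component
    return [(x + dx + rng.gauss(0, noise_std),
             y + dy + rng.gauss(0, noise_std)) for x, y in traj]

# One source trajectory yields many perturbed variants for training.
traj = [(0.0, 0.0), (0.1, 0.05), (0.2, 0.1)]
augmented = [augment_trajectory(traj, seed=s) for s in range(3)]
```

Each augmented copy keeps the trajectory's length and overall shape, which matters because the GRU consumes the sequence step by step; a variable-length GRU would accept these sequences without padding to a fixed size.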
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
(02841) 145 Anam-ro, Seongbuk-gu, Seoul, Korea | Tel. 02-3290-1114
COPYRIGHT © 2021 Korea University. All Rights Reserved.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.