Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kwon, O-Yeon | - |
dc.contributor.author | Lee, Min-Ho | - |
dc.contributor.author | Guan, Cuntai | - |
dc.contributor.author | Lee, Seong-Whan | - |
dc.date.accessioned | 2021-08-30T12:55:56Z | - |
dc.date.available | 2021-08-30T12:55:56Z | - |
dc.date.created | 2021-06-19 | - |
dc.date.issued | 2020-10 | - |
dc.identifier.issn | 2162-237X | - |
dc.identifier.uri | https://scholar.korea.ac.kr/handle/2021.sw.korea/52607 | - |
dc.description.abstract | For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before the BCI can be used. This procedure takes approximately 20-30 min to collect enough data to build a reliable decoder, so a calibration-free, or subject-independent, BCI is highly desirable. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database comprises 54 subjects performing left- and right-hand MI on two different days, yielding 21,600 MI trials. In our framework, the discriminative feature representation is formulated as a combination of a spectral-spatial input that embeds the diversity of the EEG signals and a feature representation learned by the CNN, joined through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate the spectral-spatial inputs, we first identify discriminative frequency bands with an information-theoretic observation model that measures the power of the features in the two classes. From these discriminative frequency bands, spectral-spatial inputs capturing the unique characteristics of the brain signal patterns are generated and then transformed into covariance matrices that serve as the input to the CNN. During feature learning, the spectral-spatial inputs are trained individually through the CNN and then combined by a concatenation fusion technique. We demonstrate that the classification accuracy of our subject-independent (i.e., calibration-free) model outperforms that of subject-dependent models built with various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)]. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | SINGLE-TRIAL EEG | - |
dc.subject | MOTOR IMAGERY | - |
dc.subject | CLASSIFICATION | - |
dc.subject | PERFORMANCE | - |
dc.subject | PATTERNS | - |
dc.subject | BCI | - |
dc.title | Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lee, Seong-Whan | - |
dc.identifier.doi | 10.1109/TNNLS.2019.2946869 | - |
dc.identifier.scopusid | 2-s2.0-85092679904 | - |
dc.identifier.wosid | 000576436600006 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.31, no.10, pp.3839 - 3852 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS | - |
dc.citation.title | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS | - |
dc.citation.volume | 31 | - |
dc.citation.number | 10 | - |
dc.citation.startPage | 3839 | - |
dc.citation.endPage | 3852 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Hardware & Architecture | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | SINGLE-TRIAL EEG | - |
dc.subject.keywordPlus | MOTOR IMAGERY | - |
dc.subject.keywordPlus | CLASSIFICATION | - |
dc.subject.keywordPlus | PERFORMANCE | - |
dc.subject.keywordPlus | PATTERNS | - |
dc.subject.keywordPlus | BCI | - |
dc.subject.keywordAuthor | Electroencephalography | - |
dc.subject.keywordAuthor | Databases | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Electrodes | - |
dc.subject.keywordAuthor | Brain modeling | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Calibration | - |
dc.subject.keywordAuthor | Brain-computer interface (BCI) | - |
dc.subject.keywordAuthor | convolutional neural networks (CNNs) | - |
dc.subject.keywordAuthor | deep learning (DL) | - |
dc.subject.keywordAuthor | electroencephalography (EEG) | - |
dc.subject.keywordAuthor | motor imagery (MI) | - |
dc.subject.keywordAuthor | subject-independent | - |
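The abstract describes transforming band-specific EEG signals into covariance matrices that serve as CNN inputs. The following is a minimal illustrative sketch of that idea, not the authors' code: the frequency bands, channel count, sampling rate, and trace normalization below are assumptions chosen for demonstration.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# build a stack of per-band channel-covariance matrices from one EEG trial,
# yielding a (n_bands, channels, channels) tensor suitable as CNN input.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trial, low, high, fs):
    """Zero-phase band-pass filter; trial has shape (channels, samples)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, trial, axis=1)

def spectral_spatial_input(trial, bands, fs):
    """Return one trace-normalized channel-covariance matrix per band,
    stacked along the first axis."""
    mats = []
    for low, high in bands:
        x = bandpass(trial, low, high, fs)
        x = x - x.mean(axis=1, keepdims=True)  # remove per-channel mean
        c = x @ x.T / x.shape[1]               # sample covariance (ch x ch)
        mats.append(c / np.trace(c))           # trace-normalize for scale
    return np.stack(mats)

fs = 250                                        # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
trial = rng.standard_normal((62, 4 * fs))       # mock 62-channel, 4-s trial
bands = [(8, 12), (12, 16), (16, 20), (20, 30)] # assumed mu/beta sub-bands
inp = spectral_spatial_input(trial, bands, fs)
print(inp.shape)  # (4, 62, 62)
```

In the paper's framework, each such per-band input would be fed through the CNN separately and the resulting features combined by concatenation fusion; the band edges here are placeholders, since the paper selects discriminative bands via an information-theoretic criterion.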