Detailed Information

Cited 31 times in Web of Science; cited 36 times in Scopus

Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks

Authors
Kwon, O-Yeon; Lee, Min-Ho; Guan, Cuntai; Lee, Seong-Whan
Issue Date
Oct-2020
Publisher
IEEE - Institute of Electrical and Electronics Engineers, Inc.
Keywords
Electroencephalography; Databases; Feature extraction; Electrodes; Brain modeling; Task analysis; Calibration; Brain-computer interface (BCI); convolutional neural networks (CNNs); deep learning (DL); electroencephalography (EEG); motor imagery (MI); subject-independent
Citation
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, v.31, no.10, pp. 3839-3852
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume
31
Number
10
Start Page
3839
End Page
3852
URI
https://scholar.korea.ac.kr/handle/2021.sw.korea/52607
DOI
10.1109/TNNLS.2019.2946869
ISSN
2162-237X
Abstract
For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he or she can use the BCI. This procedure requires approximately 20-30 min to collect enough data to build a reliable decoder. Building a calibration-free, or subject-independent, BCI is therefore of considerable interest. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database comprises 54 subjects performing left- and right-hand MI on two different days, resulting in 21,600 trials for the MI task. In our framework, the discriminative feature representation is formulated as a combination of a spectral-spatial input, which embeds the diversity of the EEG signals, and a feature representation learned by the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate the spectral-spatial inputs, we first identify discriminative frequency bands using an information-theoretic observation model that measures the power of the features in the two classes. From these discriminative frequency bands, spectral-spatial inputs that capture the unique characteristics of the brain signal patterns are generated and then transformed into covariance matrices as the input to the CNN. For feature representation, the spectral-spatial inputs are trained individually through the CNN and then combined by a concatenation fusion technique. We demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)].
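Method Sketch (Illustrative)
The following is a minimal sketch, in Python with NumPy, SciPy, scikit-learn, and PyTorch, of the pipeline the abstract outlines: frequency-band scoring, spectral-spatial covariance inputs, per-band CNN branches, and concatenation fusion. The candidate bands, the sampling rate, the mutual-information scoring (used here in place of the paper's information-theoretic observation model), and the network sizes are assumptions for illustration, not the authors' published configuration.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.feature_selection import mutual_info_classif
import torch
import torch.nn as nn

FS = 250  # sampling rate in Hz (assumed; not stated in the abstract)
CANDIDATE_BANDS = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32)]

def bandpass(x, low, high, fs=FS, order=4):
    # Zero-phase Butterworth band-pass filter applied along the time axis.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def band_scores(trials, labels, bands=CANDIDATE_BANDS):
    # Score each candidate band by the mutual information between its log
    # band-power features and the class labels (a stand-in for the paper's
    # information-theoretic observation model).
    scores = []
    for low, high in bands:
        filtered = bandpass(trials, low, high)              # (n_trials, n_channels, n_samples)
        log_power = np.log(filtered.var(axis=-1) + 1e-12)   # (n_trials, n_channels)
        scores.append(mutual_info_classif(log_power, labels, random_state=0).mean())
    return np.asarray(scores)

def spectral_spatial_inputs(trials, bands):
    # Band-pass each trial and convert it to a per-band spatial covariance matrix.
    covs = []
    for low, high in bands:
        filtered = bandpass(trials, low, high)
        covs.append(np.einsum("nct,ndt->ncd", filtered, filtered) / filtered.shape[-1])
    return covs                                             # list of (n_trials, n_ch, n_ch)

class BandBranch(nn.Module):
    # One small convolutional branch per spectral-spatial (covariance) input.
    def __init__(self, n_features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, n_features), nn.ReLU(),
        )

    def forward(self, cov):                                 # cov: (batch, n_ch, n_ch)
        return self.net(cov.unsqueeze(1))

class FusionCNN(nn.Module):
    # Concatenation fusion of the per-band branch features, then a linear classifier.
    def __init__(self, n_bands, n_classes=2, n_features=32):
        super().__init__()
        self.branches = nn.ModuleList([BandBranch(n_features) for _ in range(n_bands)])
        self.classifier = nn.Linear(n_features * n_bands, n_classes)

    def forward(self, covs):                                # covs: list of (batch, n_ch, n_ch)
        feats = [branch(c) for branch, c in zip(self.branches, covs)]
        return self.classifier(torch.cat(feats, dim=1))

# Synthetic example; pooled multi-subject training data would follow the same shapes.
trials = np.random.randn(20, 62, 1000)                      # (trials, channels, samples)
labels = np.random.randint(0, 2, size=20)
top = [CANDIDATE_BANDS[i] for i in np.argsort(band_scores(trials, labels))[-4:]]
covs = [torch.tensor(c, dtype=torch.float32) for c in spectral_spatial_inputs(trials, top)]
print(FusionCNN(n_bands=len(top))(covs).shape)              # torch.Size([20, 2])

The per-branch design mirrors the abstract's description of training each spectral-spatial input individually before combining the learned features by concatenation; for the exact architecture and band-selection criterion, consult the full article via the DOI above.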
Files in This Item
There are no files associated with this item.
Appears in
Collections
Graduate School > Department of Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Seong-Whan
Department of Artificial Intelligence
