Motor Imagery Classification Using Inter-Task Transfer Learning via a Channel-Wise Variational Autoencoder-Based Convolutional Neural Network
- Authors
- Lee, Do-Yeun; Jeong, Ji-Hoon; Lee, Byeong-Hoo; Lee, Seong-Whan
- Issue Date
- 2022
- Publisher
- IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Keywords
- Brain-computer interface; electroencephalogram; motor imagery; motor execution; deep learning
- Citation
- IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, v.30, pp.226 - 237
- Indexed
- SCIE; SCOPUS
- Journal Title
- IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING
- Volume
- 30
- Start Page
- 226
- End Page
- 237
- URI
- https://scholar.korea.ac.kr/handle/2021.sw.korea/137604
- DOI
- 10.1109/TNSRE.2022.3143836
- ISSN
- 1534-4320
- Abstract
- Highly sophisticated control based on a brain-computer interface (BCI) requires decoding kinematic information from brain signals. The forearm is a region of the upper limb that is frequently used in everyday life, but intuitive movements within the same limb have rarely been investigated in previous BCI studies. In this study, we focused on decoding various forearm movements from electroencephalography (EEG) signals using a small number of samples. Ten healthy participants took part in an experiment and performed motor execution (ME) and motor imagery (MI) of intuitive movement tasks (Dataset I). We propose a convolutional neural network using a channel-wise variational autoencoder (CVNet) based on inter-task transfer learning. Our approach is that training on reconstructed ME-EEG signals together with only a small amount of MI-EEG signals can still achieve sufficient classification performance. The proposed CVNet was validated on our own Dataset I and on a public dataset, BNCI Horizon 2020 (Dataset II). The classification accuracies for the various movements were 0.83 (± 0.04) and 0.69 (± 0.04) for Datasets I and II, respectively. The results show that the proposed method improves performance by approximately 0.09–0.27 and 0.08–0.24 compared with conventional models for Datasets I and II, respectively. These outcomes suggest that a model for decoding imagined movements can be trained using ME data together with a small number of MI samples. Hence, we demonstrate the feasibility of a BCI learning strategy that can sufficiently train a deep learning model with only a small amount of calibration data and time, while maintaining stable performance.
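- The following is a minimal conceptual sketch (in PyTorch) of the idea summarized in the abstract: a channel-wise variational autoencoder reconstructs EEG trials, and a CNN classifier is trained on the reconstructed ME-EEG plus a few MI-EEG trials. The channel count, trial length, number of classes, layer sizes, and training flow shown here are illustrative assumptions and do not reproduce the published CVNet architecture.

```python
# Conceptual sketch only: channel-wise VAE + CNN classifier for EEG,
# with an ME -> MI inter-task transfer flavor. All dimensions are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHANNELS, N_SAMPLES, N_CLASSES = 64, 500, 4  # assumed EEG dimensions


class ChannelWiseVAE(nn.Module):
    """Encodes and reconstructs each EEG channel's time series independently."""

    def __init__(self, n_samples=N_SAMPLES, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(n_samples, 128)
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_samples))

    def forward(self, x):                       # x: (batch, channels, samples)
        h = F.relu(self.enc(x))                 # linear layers act per channel
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar          # reconstruction per channel


class EEGClassifierCNN(nn.Module):
    """Simple temporal + spatial convolution classifier (illustrative stub)."""

    def __init__(self, n_channels=N_CHANNELS, n_classes=N_CLASSES):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, (1, 25), padding=(0, 12))
        self.spatial = nn.Conv2d(8, 16, (n_channels, 1))
        self.pool = nn.AdaptiveAvgPool2d((1, 8))
        self.fc = nn.Linear(16 * 8, n_classes)

    def forward(self, x):                       # x: (batch, channels, samples)
        x = x.unsqueeze(1)                      # -> (batch, 1, channels, samples)
        x = F.elu(self.temporal(x))
        x = F.elu(self.spatial(x))
        return self.fc(self.pool(x).flatten(1))


def vae_loss(recon, x, mu, logvar):
    """Standard VAE objective: reconstruction error + KL divergence."""
    rec = F.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


# Inter-task transfer idea (sketch): fit the VAE and pretrain the classifier
# on plentiful ME-EEG, then fine-tune with only a few MI-EEG trials.
vae, clf = ChannelWiseVAE(), EEGClassifierCNN()
me_eeg = torch.randn(32, N_CHANNELS, N_SAMPLES)      # stand-in ME batch
me_labels = torch.randint(0, N_CLASSES, (32,))
mi_eeg = torch.randn(4, N_CHANNELS, N_SAMPLES)       # small MI batch
mi_labels = torch.randint(0, N_CLASSES, (4,))

recon, mu, logvar = vae(me_eeg)
loss_vae = vae_loss(recon, me_eeg, mu, logvar)                # VAE on ME-EEG
loss_pre = F.cross_entropy(clf(recon.detach()), me_labels)    # pretrain on reconstructions
loss_ft = F.cross_entropy(clf(mi_eeg), mi_labels)             # fine-tune on few MI trials
print(loss_vae.item(), loss_pre.item(), loss_ft.item())
```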
- Appears in Collections
- Graduate School > Department of Artificial Intelligence > 1. Journal Articles