Research on Decoding of Neural Network Algorithm for Upper Limb Rehabilitation Robot Based on Motor Imagery

Authors

  • Zhenkun Tian

DOI:

https://doi.org/10.61173/zkk51s84

Keywords:

Brain-Computer Interfaces (BCIs), Motor Imagery, Convolutional Neural Network, Transformer

Abstract

Brain-computer interfaces (BCIs) enable direct communication with computers by encoding and decoding brain electrical signals to generate control commands and interact directly with external devices, thereby assisting patients with damaged motor nerves. Research on brain electrical signals based on motor imagery has long attracted widespread attention. To effectively extract and classify complex electroencephalography (EEG) features, a system architecture must be constructed that can withstand a low signal-to-noise ratio, signal instability, and physiological artifacts. Advances in motor imagery EEG decoding have been driven by the rapid development of deep learning in recent years. This paper introduces and analyzes three decoding models based on convolutional neural network (CNN) architectures and three models that combine CNN and Transformer architectures. Through within-group and cross-group comparisons of each model's accuracy on the mainstream motor imagery datasets BCI Competition IV 2a and 2b, the advantages and disadvantages of each model are explored. The comparison shows that CNN-based models still occupy a dominant position owing to their fewer parameters and better adaptability to varied conditions. However, the fusion decoding model CTNet performs very well on BCI IV 2a and 2b, indicating that Transformer-based decoding models have greater performance potential and represent the main direction of future development.
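The hybrid CNN-Transformer decoders discussed above typically apply a temporal convolution to the raw EEG trial and then pass the pooled feature sequence through self-attention. The following NumPy sketch illustrates this two-stage pipeline in miniature; the shapes (22 channels, 1000 samples, as in a BCI IV 2a trial), the single random kernel, and the single-head attention are illustrative assumptions, not the architecture of any specific model in the paper.

```python
import numpy as np

# Illustrative trial: 22 EEG channels x 1000 time samples (BCI IV 2a-like shape).
rng = np.random.default_rng(0)
x = rng.standard_normal((22, 1000))

# CNN stage: one shared 64-sample temporal kernel per channel, standing in for
# the learned temporal-filtering layer of CNN-based motor imagery decoders.
kernel = rng.standard_normal(64)
feat = np.stack([np.convolve(ch, kernel, mode="valid") for ch in x])  # (22, 937)

# Average pooling to form a token sequence for the Transformer stage.
pool = 8
T = feat.shape[1] // pool
tokens = feat[:, :T * pool].reshape(22, T, pool).mean(axis=2).T  # (T, 22)

# Transformer stage: single-head scaled dot-product self-attention over tokens.
d = tokens.shape[1]
scores = tokens @ tokens.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)          # softmax over key axis
attended = weights @ tokens                             # (T, 22) mixed features

print(tokens.shape, attended.shape)
```

In a real decoder the kernel and attention projections are learned, multiple heads and layers are stacked, and a classification head maps the attended features to motor imagery classes; this sketch only shows how the convolutional front end produces the token sequence that the Transformer back end mixes globally.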

Published

2025-08-26

Section

Articles