A novel strategy for differentiating motor imagination brain-computer interface tasks by fusing EEG and functional near-infrared spectroscopy signals.

  • Additional Information
    • Highlights:
      • The multi-modal acquisition method based on electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) can obtain more comprehensive physiological information about motor imagery tasks.
      • Feature-level fusion of multi-modal information has higher robustness.
      • A reasonable feature screening method can effectively reduce the feature dimension and improve classification performance.
      • More and more researchers are paying attention to multimodal motor imagery brain-computer interfaces.
    • Abstract:
      • The multimodal brain–computer interface (BCI) is an innovative paradigm for human–computer interaction that utilizes both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals; it has therefore received considerable interest. In this study, we concurrently collected EEG and fNIRS data from eighteen healthy participants while they imagined grasping movements with their left and right hands. During the feature screening stage, we assessed the effectiveness of combining the Relief and minimum redundancy maximum relevance (mRMR) algorithms. This combined algorithm was applied separately to the common spatial pattern (CSP) features of the EEG signals in distinct frequency bands and to the modified CSP (MCSP) features of the fNIRS signals. Moreover, the improvement in classification accuracy achieved by feature-level fusion of the two signal types was investigated. The support vector machine (SVM) algorithm was used as the classifier for both training and validation. The results show a significant decrease in the feature count and a substantial improvement in classification accuracy. In addition, the highest classification accuracy (88.33% ± 5.80% for EEG + HbO + HbR, P < 0.05) was achieved with multimodal features, exceeding that obtained with EEG alone (84.28% ± 7.56%). Furthermore, the proportion of participants whose classification accuracy improved under multi-modal features was highest when EEG was combined with HbR (88.89%). The proposed multi-modal information fusion strategy can serve as an effective reference for task recognition in BCI. [ABSTRACT FROM AUTHOR]
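    • Pipeline sketch:
      • The snippet below is a minimal, hypothetical Python sketch of the pipeline the abstract describes: feature-level fusion by concatenating pre-extracted EEG CSP features with fNIRS (HbO and HbR) features, a simple filter-based screening step (scikit-learn's mutual_info_classif, used only as a stand-in for the paper's Relief + mRMR combination), and an SVM classifier evaluated with cross-validation. The synthetic data, array shapes, and the k = 10 feature count are illustrative assumptions, not values from the study; for simplicity the sketch screens after fusion, whereas the abstract describes applying the screening to each modality's features individually.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Placeholder feature matrices: in the study these would be CSP features
        # from several EEG frequency bands and modified-CSP (MCSP) features
        # computed from the HbO and HbR signals.
        n_trials = 120                               # left- vs right-hand motor imagery trials
        X_eeg = rng.standard_normal((n_trials, 24))  # stand-in for band-wise CSP features
        X_hbo = rng.standard_normal((n_trials, 8))   # stand-in for MCSP features from HbO
        X_hbr = rng.standard_normal((n_trials, 8))   # stand-in for MCSP features from HbR
        y = rng.integers(0, 2, n_trials)             # 0 = left hand, 1 = right hand

        # Feature-level fusion: concatenate the modality-specific feature vectors.
        X_fused = np.hstack([X_eeg, X_hbo, X_hbr])

        # Feature screening + classification. mutual_info_classif is only a
        # convenient stand-in for the Relief + mRMR screening used in the paper.
        clf = make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_classif, k=10),  # k chosen for illustration
            SVC(kernel="rbf", C=1.0),
        )

        scores = cross_val_score(clf, X_fused, y, cv=5)
        print(f"Fused EEG + HbO + HbR accuracy: {scores.mean():.3f} ± {scores.std():.3f}")

      • Swapping SelectKBest for a Relief + mRMR implementation, or screening each modality before concatenation, would keep the same overall pipeline structure.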