Zhang Xiaodong, Li Hanzhe, Dong Runlin, Lu Zhufeng, Li Cunxin
School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China.
Shaanxi Key Laboratory of Intelligent Robots, Xi'an Jiaotong University, Xi'an, Shaanxi, China.
Front Neurosci. 2022 Sep 23;16:954387. doi: 10.3389/fnins.2022.954387. eCollection 2022.
Fusion of the electroencephalogram (EEG) and the surface electromyogram (sEMG) has been widely used to detect human movement intention for human-robot interaction, but the internal relationship between EEG and sEMG signals remains unclear, so their fusion still has some shortcomings. In this study, a precise EEG-sEMG fusion method using a CNN-LSTM model was investigated to detect lower limb voluntary movement. First, the signal processing of EEG and sEMG at each stage was analyzed so that the response time difference between EEG and sEMG for detecting lower limb voluntary movement could be estimated, and this difference could also be calculated by symbolic transfer entropy. Second, both data fusion and feature fusion of EEG and sEMG were used to obtain the input data matrix of the model, and a hybrid CNN-LSTM model was established as the EEG- and sEMG-based decoding model of lower limb voluntary movement; the estimated time difference was about 24-26 ms, and the calculated value was between 25 and 45 ms. Finally, the offline experimental results showed that, in 5-fold cross-validation, the accuracy of data fusion was significantly higher than that of feature fusion, and the average accuracy of EEG and sEMG data fusion was more than 95%; eliminating the response time difference between EEG and sEMG improved the average accuracy of data fusion by about 0.7 ± 0.26%. Meanwhile, the online average accuracy of the data fusion-based CNN-LSTM was more than 87% for all subjects. These results demonstrate that the time difference influences EEG and sEMG fusion for detecting lower limb voluntary movement and that the proposed CNN-LSTM model can achieve high performance. This work provides a stable and reliable basis for human-robot interaction with lower limb exoskeletons.
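The response time difference could, for example, be estimated by symbolizing both signals with ordinal patterns and scanning for the lag at which the symbolic transfer entropy from EEG to sEMG peaks. The sketch below is a minimal illustration of that idea, not the authors' implementation; the sampling rate, embedding dimension, and lag search range are assumptions.

```python
# Minimal sketch (assumed parameters, not the authors' code): estimate the
# EEG-to-sEMG response-time lag as the lag that maximizes symbolic transfer entropy.
import numpy as np
from itertools import permutations

def symbolize(x, m=3, tau=1):
    """Map a 1-D signal to ordinal-pattern symbols of embedding dimension m."""
    patterns = {p: i for i, p in enumerate(permutations(range(m)))}
    n = len(x) - (m - 1) * tau
    symbols = np.empty(n, dtype=int)
    for t in range(n):
        window = x[t:t + m * tau:tau]
        symbols[t] = patterns[tuple(int(i) for i in np.argsort(window))]
    return symbols

def transfer_entropy(src, dst):
    """Transfer entropy (bits) from symbol sequence src to dst, history length 1."""
    n = min(len(src), len(dst)) - 1
    trip = np.stack([dst[1:n + 1], dst[:n], src[:n]], axis=1)  # (y_{t+1}, y_t, x_t)
    joint, counts = np.unique(trip, axis=0, return_counts=True)
    p_joint = counts / n

    def prob(cols, values):
        # Empirical marginal probability of the given column values.
        return np.all(trip[:, cols] == values, axis=1).sum() / n

    te = 0.0
    for (y1, y0, x0), p in zip(joint, p_joint):
        p_y0x0 = prob([1, 2], [y0, x0])
        p_y1y0 = prob([0, 1], [y1, y0])
        p_y0 = prob([1], [y0])
        te += p * np.log2((p / p_y0x0) / (p_y1y0 / p_y0))
    return te

def estimate_lag(eeg, semg, fs=1000, max_lag_ms=60):
    """Return the lag (ms) at which STE from EEG to sEMG peaks (assumed 1 kHz sampling)."""
    s_eeg, s_emg = symbolize(eeg), symbolize(semg)
    lags = range(1, int(max_lag_ms * fs / 1000) + 1)
    ste = [transfer_entropy(s_eeg[:-lag], s_emg[lag:]) for lag in lags]
    return lags[int(np.argmax(ste))] * 1000 / fs
```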
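A data-level fusion decoder of the kind described could be realized by shifting the sEMG channels by the estimated lag, stacking them with the EEG channels, and feeding the resulting matrix to a 1-D CNN followed by an LSTM. The Keras sketch below is an assumed architecture for illustration; channel counts, window length, and layer sizes are not taken from the paper.

```python
# Assumed hybrid CNN-LSTM decoder for a data-level EEG+sEMG fusion matrix.
import numpy as np
from tensorflow.keras import layers, models

N_EEG_CH, N_EMG_CH = 32, 8   # assumed channel counts
WIN = 256                    # assumed samples per decoding window
N_CLASSES = 2                # e.g. rest vs. voluntary movement

def fuse_and_segment(eeg, semg, lag_samples, win=WIN, step=WIN // 2):
    """Data fusion: advance sEMG by the estimated lag, stack channels, cut windows."""
    eeg_al = eeg[:-lag_samples or None]     # drop the trailing EEG samples
    semg_al = semg[lag_samples:]            # drop the leading sEMG samples
    n = min(len(eeg_al), len(semg_al))
    fused = np.concatenate([eeg_al[:n], semg_al[:n]], axis=1)   # (time, channels)
    return np.stack([fused[s:s + win] for s in range(0, n - win + 1, step)])

def build_cnn_lstm(n_channels=N_EEG_CH + N_EMG_CH, win=WIN, n_classes=N_CLASSES):
    inp = layers.Input(shape=(win, n_channels))                  # (time, fused channels)
    x = layers.Conv1D(64, 5, activation="relu", padding="same")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(128, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.LSTM(64)(x)                                       # temporal dynamics of fused features
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With windows built this way, a 5-fold cross-validation of the kind reported would amount to fitting build_cnn_lstm() on four folds and scoring accuracy on the held-out fold.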