Tryon Jacob, Trejos Ana Luisa
School of Biomedical Engineering, Western University, London, ON, Canada.
Department of Electrical and Computer Engineering, Western University, London, ON, Canada.
Front Neurorobot. 2021 Nov 23;15:692183. doi: 10.3389/fnbot.2021.692183. eCollection 2021.
Wearable robotic exoskeletons have emerged as an exciting new treatment tool for disorders affecting mobility; however, the human-machine interface that the patient uses for device control requires further improvement before robotic assistance and rehabilitation can be widely adopted. One method, made possible through advancements in machine learning technology, is the use of bioelectrical signals, such as electroencephalography (EEG) and electromyography (EMG), to classify the user's actions and intentions. While classification using these signals has been demonstrated for many relevant control tasks, such as motion intention detection and gesture recognition, the difficulty of decoding bioelectrical signals has led researchers to seek methods for improving the accuracy of these models. One such method is EEG-EMG fusion: building a classification model that decodes information from both EEG and EMG signals simultaneously to increase the amount of available information. So far, EEG-EMG fusion has been implemented using traditional machine learning methods that rely on manual feature extraction; however, newer machine learning methods can automatically extract relevant information from a dataset, which may prove beneficial for EEG-EMG fusion. In this study, Convolutional Neural Network (CNN) models were developed using combined EEG-EMG inputs to determine whether they have potential as a method of EEG-EMG fusion that automatically extracts relevant information from both signals simultaneously. EEG and EMG signals were recorded during elbow flexion-extension and used to develop CNN models based on time-frequency domain (spectrogram) and time-domain (filtered signal) image inputs. The results show a mean accuracy of 80.51 ± 8.07% for a three-class output (33.33% chance level), with an F-score of 80.74%, using the time-frequency domain-based models. This work demonstrates the viability of CNNs as a new method of EEG-EMG fusion and evaluates different signal representations to determine the best implementation of a combined EEG-EMG CNN. It leverages modern machine learning methods to advance EEG-EMG fusion, which will ultimately improve the usability of wearable robotic exoskeletons.
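For readers unfamiliar with this style of pipeline, the sketch below illustrates the time-frequency fusion approach the abstract describes: per-channel EEG and EMG spectrograms are stacked into a single multi-channel "image" and passed to a small CNN with a three-class output. This is a minimal sketch under assumed settings; the sampling rates, channel counts, window length, spectrogram parameters, and layer sizes are placeholders for illustration, not the authors' published configuration.

```python
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed for illustration only: 1 s windows, 8 EEG channels at 250 Hz,
# 2 EMG channels at 1000 Hz. The study's actual setup may differ.
FS_EEG, FS_EMG = 250, 1000
EEG_CH, EMG_CH = 8, 2

def to_tf_image(eeg_win, emg_win, out_size=(32, 32)):
    """Stack per-channel EEG and EMG spectrograms into one multi-channel
    time-frequency 'image' (channels x freq x time). Resizing every
    spectrogram to a common grid is an assumption made here so the two
    signal types, recorded at different rates, can share one input tensor."""
    chans = []
    for sig, fs in [(eeg_win, FS_EEG), (emg_win, FS_EMG)]:
        for ch in sig:  # one spectrogram per recorded channel
            _, _, sxx = spectrogram(ch, fs=fs, nperseg=64, noverlap=48)
            sxx = np.log1p(sxx)  # compress dynamic range
            t = torch.tensor(sxx, dtype=torch.float32)[None, None]
            chans.append(F.interpolate(t, size=out_size, mode="bilinear",
                                       align_corners=False)[0, 0])
    return torch.stack(chans)  # shape: (EEG_CH + EMG_CH, 32, 32)

class FusionCNN(nn.Module):
    """Small CNN over the stacked EEG-EMG image with a three-class output,
    matching the abstract's task; layer sizes are placeholders."""
    def __init__(self, in_ch=EEG_CH + EMG_CH, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    eeg = np.random.randn(EEG_CH, FS_EEG)  # 1 s of synthetic EEG
    emg = np.random.randn(EMG_CH, FS_EMG)  # 1 s of synthetic EMG
    x = to_tf_image(eeg, emg)[None]        # add batch dim -> (1, 10, 32, 32)
    print(FusionCNN()(x).shape)            # torch.Size([1, 3])
```

Because both modalities enter the network as channels of one image, the convolutional filters can extract features from EEG and EMG jointly, which is the fusion property the study evaluates; a time-domain variant would instead stack the filtered signals themselves as the image rows.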