Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China.
Comput Biol Med. 2023 Nov;166:107497. doi: 10.1016/j.compbiomed.2023.107497. Epub 2023 Sep 18.
Deep learning methods have been widely used for the classification of hand gestures from sEMG signals. However, existing deep learning architectures capture only local spatial information and have limited ability to extract the global temporal dependencies that could enhance model performance. In this paper, we propose a Global and Local Feature fused CNN (GLF-CNN) model that extracts both global and local features from sEMG signals to improve hand gesture classification. The model contains two independent branches, one extracting local features and the other global features, and fuses them to learn more diverse representations and effectively improve the stability of gesture recognition. In addition, it has a lower computational cost than existing approaches. We conduct experiments on five benchmark databases, including the NinaPro DB4, NinaPro DB5, BioPatRec DB1-DB3, and the Mendeley Data. The proposed model achieves the highest average accuracy of 88.34% on these databases, with a 9.96% average accuracy improvement and a 50% reduction in variance compared with models of the same parameter count. Moreover, the classification accuracies on BioPatRec DB1, BioPatRec DB3, and the Mendeley Data are 91.4%, 91.0%, and 88.6%, respectively, corresponding to improvements of 13.2%, 41.5%, and 12.2% over the respective state-of-the-art models. The experimental results demonstrate that the proposed model effectively enhances robustness, with improved gesture recognition performance and generalization ability, and offers a new approach to prosthetic control and human-machine interaction.
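The dual-branch idea can be illustrated with a minimal PyTorch sketch. The layer widths, kernel sizes, the use of dilated convolutions plus global average pooling for the global branch, and concatenation as the fusion step are assumptions chosen for illustration only; they are not the published GLF-CNN configuration, which is described in the paper itself.

# Hypothetical sketch of a two-branch "global + local" feature fusion network
# for windowed sEMG input of shape (batch, channels, time). All layer choices
# below are illustrative assumptions, not the authors' GLF-CNN architecture.
import torch
import torch.nn as nn

class DualBranchSEMGNet(nn.Module):
    def __init__(self, in_channels: int = 16, num_classes: int = 12):
        super().__init__()
        # Local branch: small kernels capture short-range spatial patterns.
        self.local_branch = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64),
            nn.ReLU(),
        )
        # Global branch: dilated kernels enlarge the temporal receptive field,
        # standing in for "global temporal dependency" extraction.
        self.global_branch = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=9, dilation=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=9, dilation=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)  # collapse the time axis
        self.classifier = nn.Linear(64 * 2, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.pool(self.local_branch(x)).squeeze(-1)
        global_feat = self.pool(self.global_branch(x)).squeeze(-1)
        # Fusion by concatenation of the two feature vectors.
        fused = torch.cat([local_feat, global_feat], dim=1)
        return self.classifier(fused)

# Usage example: a batch of 8 windows, 16 sEMG channels, 200 samples each.
if __name__ == "__main__":
    model = DualBranchSEMGNet(in_channels=16, num_classes=12)
    logits = model(torch.randn(8, 16, 200))
    print(logits.shape)  # torch.Size([8, 12])

Concatenating the pooled branch outputs keeps the two feature streams independent until the classifier, which is one simple way a fusion model of this kind can combine local and global information.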