School of Design Arts and Media, Nanjing University of Science and Technology, Nanjing, Jiangsu, China.
School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China.
Comput Intell Neurosci. 2021 Aug 26;2021:6591035. doi: 10.1155/2021/6591035. eCollection 2021.
Hand gesture recognition based on surface electromyography (sEMG) plays an important role in biomedical and rehabilitation engineering. Recently, remarkable progress has been made in gesture recognition using high-density surface electromyography (HD-sEMG) recorded by sensor arrays. In contrast, robust gesture recognition using multichannel sEMG recorded by sparsely placed sensors remains a major challenge. In the context of multiview deep learning, this paper presents a hierarchical view pooling network (HVPN) framework that improves multichannel sEMG-based gesture recognition by learning not only view-specific deep features but also view-shared deep features from hierarchically pooled multiview feature spaces. Extensive intrasubject and intersubject evaluations were conducted on the large-scale noninvasive adaptive prosthetics (NinaPro) database to comprehensively evaluate our proposed HVPN framework. Results showed that when 200 ms sliding windows were used to segment the data, the proposed HVPN framework achieved intrasubject gesture recognition accuracies of 88.4%, 85.8%, 68.2%, 72.9%, and 90.3% and intersubject gesture recognition accuracies of 84.9%, 82.0%, 65.6%, 70.2%, and 88.9% on the first five subdatabases of NinaPro, respectively, outperforming state-of-the-art methods.
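The 200 ms sliding-window segmentation mentioned above is a standard preprocessing step for sEMG classification. As a minimal sketch, the snippet below segments a multichannel recording into overlapping windows; the sampling rate and stride are illustrative assumptions (the abstract specifies only the 200 ms window length, and NinaPro subdatabases use different sampling rates), not values taken from the paper.

```python
import numpy as np

def sliding_windows(emg, fs=2000, win_ms=200, step_ms=100):
    """Segment a multichannel sEMG recording (samples x channels)
    into overlapping analysis windows for gesture classification.
    fs and step_ms are illustrative defaults, not paper values."""
    win = int(fs * win_ms / 1000)    # samples per window (200 ms)
    step = int(fs * step_ms / 1000)  # samples per stride (assumed 100 ms)
    n = emg.shape[0]
    # stack each window into a (num_windows, win, channels) array
    return np.stack([emg[s:s + win] for s in range(0, n - win + 1, step)])

# Example: 1 s of a 10-channel signal at 2 kHz -> windows of 400 samples
x = np.zeros((2000, 10))
w = sliding_windows(x)  # shape (9, 400, 10)
```

Each window would then be fed to the classifier as one sample; shorter strides give more training samples at the cost of higher overlap between them.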