Manipal Academy of Higher Education, Manipal 576104, India.
Department of Mechanical Engineering, Polytechnique Montréal, Montreal, QC H3T 1J4, Canada.
Sensors (Basel). 2022 May 11;22(10):3650. doi: 10.3390/s22103650.
Upper limb amputation severely affects a person's quality of life and activities of daily living. In the last decade, many robotic hand prostheses have been developed that are controlled using various sensing technologies such as artificial vision, tactile sensing, and surface electromyography (sEMG). If controlled properly, these prostheses can significantly improve the daily life of hand amputees by providing them with more autonomy in physical activities. However, despite the advancements in sensing technologies and the excellent mechanical capabilities of prosthetic devices, their control is often limited and usually requires long training and adaptation times for the users. Myoelectric prostheses use signals from residual stump muscles to restore the function of the lost limb seamlessly. However, using sEMG signals as a user control signal in robotics is very complicated due to the presence of noise and the need for heavy computational power. In this article, we developed motion intention classifiers for transradial (TR) amputees based on EMG data by implementing various machine learning and deep learning models. We benchmarked the performance of these classifiers in terms of overall generalization across classes, and we present a systematic study of the impact of time-domain features and pre-processing parameters on the performance of the classification models. Our results show that ensemble learning and deep learning algorithms outperformed the other classical machine learning algorithms. Investigating the effect of varying the sliding-window length on feature-based and non-feature-based classification models revealed an interesting correlation with the level of amputation. The study also analyzed classifier performance across amputation conditions, since the history and conditions of amputation differ for each amputee.
These results are vital for understanding the development of machine learning-based classifiers for assistive robotic applications.
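To illustrate the pre-processing pipeline the abstract refers to, the sketch below shows sliding-window segmentation of an sEMG channel followed by common time-domain features (mean absolute value, waveform length, zero crossings, slope sign changes). This is a generic, minimal example; the window length, step size, sampling rate, and synthetic signal are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def sliding_windows(signal, window_size, step):
    """Split a 1-D sEMG signal into overlapping windows (rows)."""
    starts = range(0, len(signal) - window_size + 1, step)
    return np.stack([signal[s:s + window_size] for s in starts])

def time_domain_features(window):
    """Four classic time-domain sEMG features for one window."""
    mav = np.mean(np.abs(window))                 # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))          # waveform length
    sign = np.signbit(window)
    zc = np.count_nonzero(sign[1:] != sign[:-1])  # zero crossings
    dsign = np.signbit(np.diff(window))
    ssc = np.count_nonzero(dsign[1:] != dsign[:-1])  # slope sign changes
    return np.array([mav, wl, zc, ssc])

# Illustrative values: 1 s of synthetic "sEMG" at 2 kHz,
# 200 ms windows (400 samples) with a 50 ms step (100 samples).
rng = np.random.default_rng(0)
emg = rng.standard_normal(2000)
windows = sliding_windows(emg, window_size=400, step=100)
features = np.stack([time_domain_features(w) for w in windows])
print(features.shape)  # one 4-feature row per window
```

The resulting feature matrix (one row per window) is the typical input to the classical classifiers benchmarked in the paper, while "non-feature-based" deep models would instead consume the raw windows directly, which is why the sliding-window length affects the two families of models differently.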