Artificial Intelligence and Computer Vision Research Lab, Escuela Politécnica Nacional, Quito 170517, Ecuador.
Sensors (Basel). 2022 Dec 8;22(24):9613. doi: 10.3390/s22249613.
Hand gesture recognition (HGR) systems based on electromyography (EMG) and inertial measurement unit (IMU) signals have been studied for different applications in recent years. Most commonly, cutting-edge HGR methods are based on supervised machine learning. However, reinforcement learning (RL) techniques have shown potential benefits that make them a viable option for classifying EMG signals. RL-based methods offer several advantages, such as promising classification performance and the ability to learn online from experience. In this work, we developed an HGR system made up of the following stages: pre-processing, feature extraction, classification, and post-processing. For the classification stage, we built an RL-based agent capable of learning to classify and recognize eleven hand gestures (five static and six dynamic) using a deep Q-network (DQN) algorithm based on EMG and IMU information. The proposed system uses a feed-forward artificial neural network (ANN) to represent the agent's policy. We carried out the same experiments with two different sensors, the Myo armband and the G-force sensor, to compare their performance. We performed experiments using training, validation, and test set distributions, and the results were evaluated for user-specific HGR models. With the Myo armband sensor, the best model reached accuracies of up to 97.50% ± 1.13% for classification and 88.15% ± 2.84% for recognition of static gestures, and 98.95% ± 0.62% for classification and 90.47% ± 4.57% for recognition of dynamic gestures. The results obtained in this work demonstrate that RL methods such as DQN are capable of learning a policy from online experience to classify and recognize static and dynamic gestures using EMG and IMU signals.
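To illustrate the general idea of framing gesture classification as an RL problem with a feed-forward Q-network, the following is a minimal sketch (not the authors' implementation). The feature dimension, network sizes, hyperparameters, and the single-step reward scheme (+1 for a correct label, -1 for an incorrect one) are assumptions for illustration; the paper's actual DQN pipeline may differ, for example by using experience replay or a target network.

```python
# Minimal DQN-style gesture classifier sketch (illustrative assumptions only).
import random
import torch
import torch.nn as nn

N_FEATURES = 40      # assumed length of the EMG/IMU feature vector per window
N_GESTURES = 11      # five static + six dynamic gestures

class QNetwork(nn.Module):
    """Feed-forward ANN that maps a feature window to Q-values per gesture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, N_GESTURES),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
epsilon = 0.1  # exploration rate (assumed value)

def select_gesture(features: torch.Tensor) -> int:
    """Epsilon-greedy action selection; actions are the gesture labels."""
    if random.random() < epsilon:
        return random.randrange(N_GESTURES)
    with torch.no_grad():
        return int(q_net(features).argmax())

def update(features: torch.Tensor, action: int, reward: float) -> None:
    """Single-step Q-update: each window is treated as a terminal transition,
    so the target is simply the observed reward (+1 correct, -1 incorrect)."""
    q_pred = q_net(features)[action]
    target = torch.tensor(reward)
    loss = nn.functional.mse_loss(q_pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example online-learning step with a synthetic feature window and label.
features = torch.randn(N_FEATURES)
true_label = 3
action = select_gesture(features)
update(features, action, 1.0 if action == true_label else -1.0)
```

In this formulation, the "state" is the extracted feature vector for one signal window and the "action" is the predicted gesture, which is why the agent can keep learning online as new labeled windows arrive.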