IEEE Trans Neural Syst Rehabil Eng. 2024;32:3953-3965. doi: 10.1109/TNSRE.2024.3486444. Epub 2024 Nov 6.
The objective of this study was to propose a novel strategy for detecting upper-limb motion intentions from mechanical sensor signals using deep and heterogeneous transfer learning techniques. Three sensor types, surface electromyography (sEMG), force-sensitive resistors (FSRs), and inertial measurement units (IMUs), were combined to capture biometric signals during arm-up, hold, and arm-down movements. To distinguish motion intentions, deep learning models were constructed using the CIFAR-ResNet18 and CIFAR-MobileNetV2 architectures. The input features of the source models were sEMG, FSR, and IMU signals, whereas the target model was trained using only FSR and IMU sensor signals. Optimization techniques determined the appropriate layer structure and per-layer learning rates for effective transfer learning. The source model based on CIFAR-ResNet18 exhibited the highest performance, achieving an accuracy of 95% and an F1-score of 0.95. The target model with optimization strategies performed comparably to the source model, achieving an accuracy of 93% and an F1-score of 0.93. These results show that mechanical sensors alone can achieve performance comparable to that of models incorporating sEMG. The proposed approach can serve as a convenient and precise algorithm for human-robot collaboration in rehabilitation assistant robots.
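The heterogeneous transfer step described in the abstract, copying deeper layers from a source model trained on all three sensors while re-initializing the input layer for the reduced FSR+IMU feature set, and fine-tuning each layer with its own learning rate, can be sketched minimally as follows. This is an illustrative toy sketch only: the two-layer NumPy network, feature dimensions, class count, and learning rates are placeholders and do not reflect the paper's CIFAR-ResNet18/MobileNetV2 architectures or its optimized hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical "source" weights, as if trained on sEMG + FSR + IMU
# features (9 dims here, arbitrary) for 3 motion intentions
# (arm-up, hold, arm-down).
W1_src = rng.normal(size=(9, 16)) * 0.1   # input layer, 9 source features
W2_src = rng.normal(size=(16, 3)) * 0.1   # output layer, 3 intentions

# Target model: a fresh input layer for the 6 mechanical-sensor features
# (FSR + IMU only); the deeper layer is copied from the source.
W1_tgt = rng.normal(size=(6, 16)) * 0.1
W2_tgt = W2_src.copy()

# Layer-wise learning rates: train the new layer quickly, nudge the
# transferred layer gently (values illustrative, not from the paper).
lr = {"W1": 1e-1, "W2": 1e-3}

# Toy target-domain data: 120 samples of FSR+IMU features, 3 classes.
X = rng.normal(size=(120, 6))
y = rng.integers(0, 3, size=120)
Y = np.eye(3)[y]

for _ in range(200):
    H = np.tanh(X @ W1_tgt)          # hidden activations
    P = softmax(H @ W2_tgt)          # class probabilities
    G = (P - Y) / len(X)             # cross-entropy gradient at the output
    gW2 = H.T @ G
    gW1 = X.T @ ((G @ W2_tgt.T) * (1 - H**2))
    W2_tgt -= lr["W2"] * gW2         # transferred layer: small step
    W1_tgt -= lr["W1"] * gW1         # new input layer: large step

acc = (np.argmax(softmax(np.tanh(X @ W1_tgt) @ W2_tgt), axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The key design point mirrored here is that only the input layer changes shape when the sensor set shrinks from three modalities to two, so the transferred deeper weights can be preserved and fine-tuned at a much smaller rate than the newly initialized layer.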