College of Information Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China.
School of Automation, Guangdong University of Technology, Guangzhou 510006, People's Republic of China.
J Neural Eng. 2023 May 24;20(3). doi: 10.1088/1741-2552/accd22.
The gait phase and joint angle are two essential and complementary components of kinematics during normal walking, and their accurate prediction is critical for lower-limb rehabilitation, such as controlling exoskeleton robots. Multi-modal signals have been used to improve the prediction of the gait phase or the joint angle separately, but few reports have examined how these signals can be used to predict both simultaneously. To address this problem, we propose a new method named transferable multi-modal fusion (TMMF) to continuously predict knee angles and the corresponding gait phases by fusing multi-modal signals. Specifically, TMMF consists of a multi-modal signal fusion block, a time-series feature extractor, a regressor, and a classifier. The fusion block leverages the maximum mean discrepancy (MMD) to reduce the distribution discrepancy across different modalities in the latent space, achieving the goal of transferable multi-modal fusion. Subsequently, a long short-term memory (LSTM)-based network extracts a feature representation from the time-series data to predict the knee angles and gait phases simultaneously. To validate our proposal, we designed an experimental paradigm with random walking and resting to collect multi-modal biomedical signals from electromyography, gyroscopes, and virtual reality. Comprehensive experiments on the constructed dataset demonstrate the effectiveness of the proposed method: TMMF achieves a root mean square error of 0.090 ± 0.022 s in knee angle prediction and a precision of 83.7 ± 7.7% in gait phase prediction. We demonstrate the feasibility and validity of using TMMF to continuously predict lower-limb kinematics from multi-modal biomedical signals. The proposed method shows application potential for predicting the motor intent of patients with different pathologies.
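The abstract's core fusion idea is to penalize distribution mismatch between modality embeddings with the maximum mean discrepancy. As a rough illustration only, the following NumPy sketch computes a biased squared-MMD estimate with a Gaussian (RBF) kernel; the kernel choice, bandwidth `sigma`, and function names are our assumptions for illustration, not the paper's actual implementation (which uses MMD as a training loss inside a neural fusion block).

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x and y:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
# Two batches of hypothetical latent features from different modalities.
feats_a = rng.normal(0.0, 1.0, size=(200, 8))
feats_b = rng.normal(0.0, 1.0, size=(200, 8))   # same distribution as feats_a
feats_c = rng.normal(3.0, 1.0, size=(200, 8))   # shifted distribution
print(mmd2(feats_a, feats_b) < mmd2(feats_a, feats_c))
```

Minimizing such a term over modality embeddings, alongside the regression and classification losses, pulls the latent distributions of the different modalities together, which is the stated purpose of the fusion block.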