IEEE J Biomed Health Inform. 2023 Nov;27(11):5345-5356. doi: 10.1109/JBHI.2023.3311448. Epub 2023 Nov 7.
Reconstructing and predicting 3D human walking poses in unconstrained measurement environments has the potential to serve health monitoring systems for people with movement disabilities by assessing progression after treatment and providing information for assistive device control. The latest pose estimation algorithms rely on motion capture systems that combine data from IMU sensors and third-person-view cameras. However, third-person views are not always available for outpatients on their own. We therefore propose the wearable motion capture problem: reconstructing and predicting 3D human poses from wearable IMU sensors and wearable cameras, which can aid clinicians in diagnosing patients outside the clinic. To solve this problem, we introduce a novel Attention-Oriented Recurrent Neural Network (AttRNet) that contains a sensor-wise attention-oriented recurrent encoder, a reconstruction module, and a dynamic temporal attention-oriented recurrent decoder, to reconstruct the 3D human pose over time and predict the 3D poses at the following time steps. To evaluate our approach, we collected a new WearableMotionCapture dataset using wearable IMUs and wearable video cameras, along with musculoskeletal joint-angle ground truth. The proposed AttRNet achieves high accuracy on the new lower-limb WearableMotionCapture dataset and also outperforms state-of-the-art methods on two public full-body pose datasets: DIP-IMU and TotalCapture.
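To make the encoder/reconstruction/decoder layout described above concrete, the following is a minimal PyTorch sketch of that structure. It is an illustrative interpretation only, not the authors' AttRNet implementation: all module names, feature dimensions, the number of sensors, and the exact attention placement are assumptions made for this example.

```python
# Minimal sketch of a sensor-wise attention encoder, a reconstruction head, and a
# temporal-attention decoder, loosely following the abstract's description.
# Dimensions and names are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class SensorAttentionEncoder(nn.Module):
    """Weights each sensor's features with a learned attention score, then encodes over time."""
    def __init__(self, num_sensors, feat_dim, hidden_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)                          # per-sensor attention score
        self.gru = nn.GRU(num_sensors * feat_dim, hidden_dim, batch_first=True)

    def forward(self, x):                                            # x: (B, T, S, F)
        attn = torch.softmax(self.score(x), dim=2)                   # attention across sensors
        x = (attn * x).flatten(2)                                    # (B, T, S*F)
        h, _ = self.gru(x)                                           # (B, T, H)
        return h

class TemporalAttentionDecoder(nn.Module):
    """Attends over the encoder states to predict poses at future time steps."""
    def __init__(self, hidden_dim, pose_dim, pred_steps):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)
        self.pred_steps = pred_steps

    def forward(self, enc):                                          # enc: (B, T, H)
        query = enc[:, -1:, :].repeat(1, self.pred_steps, 1)         # one query per future step
        ctx, _ = self.attn(query, enc, enc)                          # temporal attention context
        h, _ = self.gru(ctx)
        return self.out(h)                                           # (B, pred_steps, pose_dim)

class AttRNetSketch(nn.Module):
    """Reconstructs poses over the observed window and predicts poses at following steps."""
    def __init__(self, num_sensors=7, feat_dim=12, hidden_dim=128, pose_dim=20, pred_steps=5):
        super().__init__()
        self.encoder = SensorAttentionEncoder(num_sensors, feat_dim, hidden_dim)
        self.reconstruct = nn.Linear(hidden_dim, pose_dim)           # pose at each observed step
        self.decoder = TemporalAttentionDecoder(hidden_dim, pose_dim, pred_steps)

    def forward(self, x):
        enc = self.encoder(x)
        recon = self.reconstruct(enc)                                # reconstruction branch
        pred = self.decoder(enc)                                     # prediction branch
        return recon, pred

if __name__ == "__main__":
    model = AttRNetSketch()
    feats = torch.randn(2, 30, 7, 12)       # batch of 2, 30 frames, 7 sensors, 12 features each
    recon, pred = model(feats)
    print(recon.shape, pred.shape)           # (2, 30, 20) and (2, 5, 20)
```

In this reading, the sensor-wise attention re-weights IMU and camera feature streams before recurrent encoding, while the temporal attention lets the decoder select which past encoder states inform each predicted future pose; how the paper actually fuses the two modalities and sizes the prediction horizon is specified in the full text, not here.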