School of Computer and Information Science, Southwest University, Chongqing 400700, China.
Sensors (Basel). 2022 Sep 21;22(19):7144. doi: 10.3390/s22197144.
We propose a novel pose estimation method that predicts the full-body pose from six inertial sensors worn by the user. This method avoids problems encountered in vision-based approaches, such as occlusion and expensive deployment. We address several complex challenges. First, we replace the bidirectional RNN structure used in previous work with an SRU network structure, reducing the model's computational cost without sacrificing accuracy. Second, our model matches the best results of previous work without requiring joint-position supervision. Finally, since sensor data tend to be noisy, we use SmoothLoss to reduce the impact of sensor noise on pose estimation. The faster deep inertial poser model proposed in this paper can perform online inference at 90 FPS on a CPU. We reduce each error metric by more than 10% and increase inference speed by 250% compared to the previous state of the art.
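To illustrate why the SRU structure is cheaper than the bidirectional RNN used in earlier work, the sketch below shows a single SRU layer in PyTorch. This is not the paper's code: layer names, sizes, and the tanh activation are assumptions. The key property is that all matrix multiplications (candidate states and gate pre-activations) are computed for every time step in one batched projection, leaving only a cheap element-wise recurrence in the sequential loop; and because it runs forward-only, no future frames are needed, which is consistent with the online inference claimed in the abstract.

```python
# Minimal sketch of one SRU layer (after Lei et al., "Simple Recurrent Units for
# Highly Parallelizable Recurrence"). Illustrative only; not the authors' code.
import torch
import torch.nn as nn


class SRULayer(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One fused projection yields the candidate state and both gate
        # pre-activations for all time steps at once.
        self.proj = nn.Linear(input_size, 3 * hidden_size)
        # Highway projection so input and hidden sizes may differ.
        self.highway = nn.Linear(input_size, hidden_size, bias=False)
        self.hidden_size = hidden_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        z, f, r = self.proj(x).chunk(3, dim=-1)   # all time steps in parallel
        f = torch.sigmoid(f)                      # forget gate
        r = torch.sigmoid(r)                      # reset / highway gate
        hx = self.highway(x)                      # highway path, also parallel
        c = x.new_zeros(batch, self.hidden_size)  # internal state c_0 = 0
        outputs = []
        for t in range(seq_len):
            # Only this element-wise recurrence is sequential.
            c = f[t] * c + (1.0 - f[t]) * z[t]
            h = r[t] * torch.tanh(c) + (1.0 - r[t]) * hx[t]
            outputs.append(h)
        return torch.stack(outputs, dim=0)        # (seq_len, batch, hidden_size)
```

Stacking a few such layers and adding a linear head that maps the hidden features to pose parameters would form a forward-only pose regressor of the kind the abstract describes; in contrast, a bidirectional RNN must also process the sequence backwards before emitting any output.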
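The abstract does not give the exact form of SmoothLoss. A common way to damp the effect of noisy IMU input is to add a temporal smoothness penalty on the predicted poses alongside the usual reconstruction term, as in the hedged sketch below; the function name, tensor shapes, and weight are hypothetical.

```python
# Hedged sketch of a "SmoothLoss"-style objective: reconstruction plus a
# penalty on frame-to-frame jitter in the predictions. Illustrative only.
import torch
import torch.nn.functional as F


def pose_loss(pred: torch.Tensor, target: torch.Tensor,
              w_smooth: float = 0.1) -> torch.Tensor:
    # pred, target: (seq_len, batch, pose_dim) pose parameters.
    rec = F.mse_loss(pred, target)          # per-frame reconstruction term
    vel = pred[1:] - pred[:-1]              # first-order (velocity) differences
    smooth = vel.pow(2).mean()              # discourage abrupt jumps from noise
    return rec + w_smooth * smooth
```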