
Suppr 超能文献



Human Motion Enhancement via Tobit Kalman Filter-Assisted Autoencoder.

Authors

Lannan Nate, Zhou L E, Fan Guoliang

Affiliation

School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA.

Publication

IEEE Access. 2022;10:29233-29251. doi: 10.1109/access.2022.3157605. Epub 2022 Mar 8.

DOI: 10.1109/access.2022.3157605
PMID: 36090467
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9455937/
Abstract

We present a novel approach to enhance the quality of human motion data collected by low-cost depth sensors, namely D-Mocap, which suffers from low accuracy and poor stability due to occlusion, interference, and algorithmic limitations. Our approach takes advantage of a large set of high-quality and diverse Mocap data by learning a general motion manifold via the convolutional autoencoder. In addition, the Tobit Kalman filter (TKF) is used to capture the kinematics of each body joint and handle censored measurement distribution. The TKF is incorporated with the autoencoder via latent space optimization, maintaining adherence to the motion manifold while preserving the kinematic nature of the original motion data. Furthermore, due to the lack of an open source benchmark dataset for this research, we have developed an extension of the Berkeley Multimodal Human Action Database (MHAD) by generating D-Mocap data from RGB-D images. The newly extended MHAD dataset is skeleton-matched and time-synced to the corresponding Mocap data and is publicly available. Along with simulated D-Mocap data generated from the CMU Mocap dataset and our self-collected D-Mocap dataset, the proposed algorithm is thoroughly evaluated and compared with different settings. Experimental results show that our approach can improve the accuracy of joint positions and angles as well as skeletal bone lengths by over 50%.
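The abstract describes a Tobit Kalman filter (TKF) that handles censored measurement distributions, i.e. sensor readings that are clipped rather than fully observed. As a rough illustration of that idea only — not the authors' implementation — the sketch below runs one predict/update cycle of a scalar Tobit-style Kalman filter: the latent measurement is observed only down to a floor `tau`, the innovation is taken against the expected value of the censored measurement, and the gain is deweighted by the probability that the reading was uncensored. The random-walk state model, the function names, and the parameters are all illustrative assumptions.

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit_kf_step(x_prev, P_prev, y_meas, q, r, tau):
    """One predict + update cycle of a scalar Tobit-style Kalman filter.

    State model:  x_t = x_{t-1} + w,  w ~ N(0, q)   (random-walk prediction)
    Measurement:  y* = x_t + v,       v ~ N(0, r),  observed as y = max(y*, tau)
    Returns the updated state estimate and variance.
    """
    # --- predict ---
    x_pred = x_prev
    P_pred = P_prev + q

    # --- Tobit measurement statistics ---
    s = math.sqrt(P_pred + r)          # std of the latent measurement y*
    a = (tau - x_pred) / s
    p_unc = 1.0 - norm_cdf(a)          # probability the reading is uncensored
    # E[max(y*, tau)] for y* ~ N(x_pred, s^2)  (standard Tobit expectation)
    y_exp = tau * norm_cdf(a) + x_pred * p_unc + s * norm_pdf(a)

    # --- update (gain deweighted by the uncensoring probability) ---
    K = p_unc * P_pred / (P_pred + r)
    x_new = x_pred + K * (y_meas - y_exp)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

With `tau` far below the operating range, `p_unc` approaches 1 and the step reduces to an ordinary Kalman update; when a reading sits at the censoring floor, the deweighted gain keeps the filter from over-trusting the clipped value. In the paper this filtering is additionally coupled with the autoencoder's motion manifold through latent-space optimization, which this scalar sketch does not attempt to reproduce.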


[Article figures (f0004–f0024) are available via the PMC full-text record: https://pmc.ncbi.nlm.nih.gov/articles/PMC9455937/]

Similar Articles

1. Human Motion Enhancement via Tobit Kalman Filter-Assisted Autoencoder.
IEEE Access. 2022;10:29233-29251. doi: 10.1109/access.2022.3157605. Epub 2022 Mar 8.
2. Joint Optimization of Kinematics and Anthropometrics for Human Motion Denoising.
IEEE Sens J. 2022 Mar;22(5):4386-4399. doi: 10.1109/jsen.2022.3144946. Epub 2022 Jan 20.
3. DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors.
Sensors (Basel). 2019 Jan 11;19(2):282. doi: 10.3390/s19020282.
4. A New Quaternion-Based Kalman Filter for Human Body Motion Tracking Using the Second Estimator of the Optimal Quaternion Algorithm and the Joint Angle Constraint Method with Inertial and Magnetic Sensors.
Sensors (Basel). 2020 Oct 23;20(21):6018. doi: 10.3390/s20216018.
5. An Enhanced Joint Hilbert Embedding-Based Metric to Support Mocap Data Classification with Preserved Interpretability.
Sensors (Basel). 2021 Jun 29;21(13):4443. doi: 10.3390/s21134443.
6. Markerless 3D Skeleton Tracking Algorithm by Merging Multiple Inaccurate Skeleton Data from Multiple RGB-D Sensors.
Sensors (Basel). 2022 Apr 20;22(9):3155. doi: 10.3390/s22093155.
7. Deep Multimodal Fusion Autoencoder for Saliency Prediction of RGB-D Images.
Comput Intell Neurosci. 2021 May 5;2021:6610997. doi: 10.1155/2021/6610997. eCollection 2021.
8. ARMA-Based Segmentation of Human Limb Motion Sequences.
Sensors (Basel). 2021 Aug 19;21(16):5577. doi: 10.3390/s21165577.
9. Markerless motion capture: What clinician-scientists need to know right now.
JSAMS Plus. 2022 Oct;1. doi: 10.1016/j.jsampl.2022.100001. Epub 2022 Nov 14.
10. An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks.
Sensors (Basel). 2021 Apr 1;21(7):2415. doi: 10.3390/s21072415.

Cited By

1. Intelligent ADL Recognition via IoT-Based Multimodal Deep Learning Framework.
Sensors (Basel). 2023 Sep 16;23(18):7927. doi: 10.3390/s23187927.
2. A Review of Depth-Based Human Motion Enhancement: Past and Present.
IEEE J Biomed Health Inform. 2024 Feb;28(2):633-644. doi: 10.1109/JBHI.2023.3257662. Epub 2024 Feb 5.

References

1. Spatiotemporal Gait Measurement With a Side-View Depth Sensor Using Human Joint Proposals.
IEEE J Biomed Health Inform. 2021 May;25(5):1758-1769. doi: 10.1109/JBHI.2020.3024925. Epub 2021 May 11.
2. Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks.
Nat Commun. 2020 Mar 25;11(1):1551. doi: 10.1038/s41467-020-15086-2.
3. Spatio-Temporal Manifold Learning for Human Motions via Long-Horizon Modeling.
IEEE Trans Vis Comput Graph. 2021 Jan;27(1):216-227. doi: 10.1109/TVCG.2019.2936810. Epub 2020 Nov 24.
4. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields.
IEEE Trans Pattern Anal Mach Intell. 2021 Jan;43(1):172-186. doi: 10.1109/TPAMI.2019.2929257. Epub 2020 Dec 4.
5. Expanding instrumented gait testing in the community setting: A portable, depth-sensing camera captures joint motion in older adults.
PLoS One. 2019 May 15;14(5):e0215995. doi: 10.1371/journal.pone.0215995. eCollection 2019.
6. Clinical assessment of depth sensor based pose estimation algorithms for technology supervised rehabilitation applications.
Int J Med Inform. 2019 Jan;121:30-38. doi: 10.1016/j.ijmedinf.2018.11.001. Epub 2018 Nov 8.
7. Kinect-Based In-Home Exercise System for Lymphatic Health and Lymphedema Intervention.
IEEE J Transl Eng Health Med. 2018 Oct 12;6:4100313. doi: 10.1109/JTEHM.2018.2859992. eCollection 2018.
8. System for automatic gait analysis based on a single RGB-D camera.
PLoS One. 2018 Aug 3;13(8):e0201728. doi: 10.1371/journal.pone.0201728. eCollection 2018.
9. Validation of enhanced kinect sensor based motion capturing for gait assessment.
PLoS One. 2017 Apr 14;12(4):e0175813. doi: 10.1371/journal.pone.0175813. eCollection 2017.
10. Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology.
Int J Ther Massage Bodywork. 2017 Mar 10;10(1):3-9. eCollection 2017 Mar.