Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States of America.
Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America.
PLoS Comput Biol. 2021 Apr 23;17(4):e1008935. doi: 10.1371/journal.pcbi.1008935. eCollection 2021 Apr.
Human gait analysis is often conducted in clinical and basic research, but many common approaches (e.g., three-dimensional motion capture, wearables) are expensive, immobile, data-limited, and require expertise. Recent advances in video-based pose estimation suggest potential for gait analysis using two-dimensional video collected from readily accessible devices (e.g., smartphones). To date, several studies have extracted features of human gait using markerless pose estimation. However, video-based approaches have not yet been evaluated against a human gait dataset across a wide range of gait parameters on a stride-by-stride basis, and a workflow for performing gait analysis from video is lacking. Here, we compared spatiotemporal and sagittal-plane kinematic gait parameters measured with OpenPose (open-source, video-based human pose estimation) against simultaneously recorded three-dimensional motion capture during overground walking in healthy adults. When assessing all individual steps in the walking bouts, we observed mean absolute errors between motion capture and OpenPose of 0.02 s for temporal gait parameters (i.e., step time, stance time, swing time, and double support time) and 0.049 m for step lengths. Accuracy improved when spatiotemporal gait parameters were calculated as individual participant mean values: mean absolute errors were 0.01 s for temporal gait parameters and 0.018 m for step lengths. The greatest difference in gait speed between motion capture and OpenPose was less than 0.10 m s⁻¹. Mean absolute errors of sagittal-plane hip, knee, and ankle angles between motion capture and OpenPose were 4.0°, 5.6°, and 7.4°, respectively. Our analysis workflow is freely available, involves minimal user input, and does not require prior gait analysis expertise. Finally, we offer suggestions and considerations for future applications of pose estimation for human gait analysis.
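To illustrate the kind of analysis the abstract describes, the sketch below shows one way to derive step times from 2D ankle keypoint trajectories and compare them to motion-capture step times via mean absolute error. This is a minimal, hypothetical example, not the authors' released workflow; the function names, the heel-strike heuristic (ankle at its most forward position), and the numeric values are assumptions for illustration only.

```python
# Illustrative sketch (not the published workflow): estimate step times from
# 2D ankle keypoint trajectories and compare them to motion-capture step times
# with mean absolute error. All names and numbers here are hypothetical.
import numpy as np
from scipy.signal import find_peaks

def heel_strike_frames(ankle_x, walking_direction=1, min_separation=20):
    """Detect heel strikes as frames where the ankle is farthest forward.

    ankle_x : 1D array of horizontal ankle keypoint positions (pixels).
    walking_direction : +1 if the subject walks toward increasing x, else -1.
    min_separation : minimum number of frames between successive heel strikes.
    """
    signal = walking_direction * np.asarray(ankle_x, dtype=float)
    peaks, _ = find_peaks(signal, distance=min_separation)
    return peaks

def step_times(heel_strikes_left, heel_strikes_right, fps):
    """Step time = interval between consecutive contralateral heel strikes."""
    events = np.sort(np.concatenate([heel_strikes_left, heel_strikes_right]))
    return np.diff(events) / fps

# Example comparison against motion capture (synthetic numbers for illustration)
openpose_steps = np.array([0.52, 0.55, 0.53, 0.56])   # seconds
mocap_steps    = np.array([0.54, 0.53, 0.55, 0.54])   # seconds
mae = np.mean(np.abs(openpose_steps - mocap_steps))
print(f"Step-time MAE: {mae:.3f} s")
```

A similar event-matching step would be needed in practice to pair each OpenPose-detected step with the corresponding motion-capture step before computing errors, as the paper does on a stride-by-stride basis.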