Hatamzadeh Mehran, Busé Laurent, Chorin Frédéric, Alliez Pierre, Favreau Jean-Dominique, Zory Raphael
Université Côte d'Azur, LAMHESS, Nice, France; Université Côte d'Azur, Inria, Sophia Antipolis, France; Université Côte d'Azur, CHU, Cimiez, Plateforme fragilité, Nice, France.
J Biomech. 2022 Dec;145:111358. doi: 10.1016/j.jbiomech.2022.111358. Epub 2022 Oct 26.
The emergence of RGB-D cameras and the development of pose estimation algorithms offer opportunities in biomechanics. However, challenges remain when using them for gait analysis, including noise, which leads to misidentified gait events and inaccurate measurements. Therefore, we present a novel kinematic-geometric model for spatio-temporal gait analysis, based on the ankles' trajectory in the frontal plane and distance-to-camera (depth) data. Our approach consists of three main steps: identification and modeling of the gait pattern via parameterized curves, development of a fitting algorithm, and computation of locomotive indices. The proposed fitting algorithm is applied to both ankles' depth data simultaneously, minimizing geometric and biomechanical error functions through numerical optimization. For validation, 15 subjects were asked to walk within the OptoGait walkway while both the OptoGait and an RGB-D camera (Microsoft Azure Kinect) recorded. The spatio-temporal parameters of both feet were then computed using the OptoGait and the proposed model. Validation results show that the proposed model yields good to excellent absolute statistical agreement (0.86 ≤ R ≤ 0.99). Our kinematic-geometric model offers several benefits: (1) it relies only on the ankles' depth trajectory, both for gait event extraction and for the calculation of spatio-temporal parameters; (2) it is usable with any kind of RGB-D camera, or even with 3D marker-based motion analysis systems in the absence of toe and heel markers; and (3) it enables improved results through denoising and smoothing of the ankles' depth trajectory. Hence, the proposed kinematic-geometric model facilitates the development of portable markerless systems for accurate gait analysis.
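To illustrate the kind of fitting the abstract describes, here is a minimal sketch: a parameterized curve is fitted to one ankle's noisy depth trace by least-squares numerical optimization, and a spatio-temporal quantity is read off the fitted parameters. The logistic step model, the parameter names, and the synthetic data below are all illustrative assumptions, not the paper's actual kinematic-geometric curves or error functions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical single-step model (assumption, not the paper's model):
# the ankle's distance to the camera is roughly constant during stance
# and transitions smoothly to a new plateau during swing, so one step
# can be parameterized as a logistic transition between two depths.
def step_model(t, d0, d1, t_mid, k):
    return d0 + (d1 - d0) / (1.0 + np.exp(-k * (t - t_mid)))

def residuals(p, t, depth):
    # Geometric error: model depth minus observed depth at each frame.
    return step_model(t, *p) - depth

# Synthetic noisy depth trace: one swing from 3.0 m to 2.4 m around t = 0.5 s,
# with ~1 cm depth noise, standing in for real RGB-D ankle data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
depth = step_model(t, 3.0, 2.4, 0.5, 25.0) + rng.normal(0.0, 0.01, t.size)

# Fit the curve parameters by numerical optimization.
fit = least_squares(residuals, x0=[2.9, 2.5, 0.4, 10.0], args=(t, depth))
d0, d1, t_mid, k = fit.x

# A spatio-temporal index from the fitted parameters: the plateau
# difference approximates the forward progression of that step, and
# t_mid approximates the mid-swing instant.
step_length = d0 - d1
```

In the paper, the optimization runs on both ankles' trajectories jointly and includes biomechanical error terms alongside the geometric ones; this sketch shows only the single-curve geometric part.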