Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe 650-0017, Japan.
Sensors (Basel). 2023 Jul 16;23(14):6445. doi: 10.3390/s23146445.
Markerless motion capture has advanced substantially in accuracy, but joint angles measured with it still deviate from those taken with a goniometer. This study integrates machine learning techniques with markerless motion capture to reduce this discrepancy. Two artificial-intelligence-based libraries, MediaPipe and LightGBM, were employed for markerless motion capture and shoulder abduction angle estimation. The motion of ten healthy volunteers was captured using smartphone cameras at right shoulder abduction angles ranging from 10° to 160°. The cameras were placed 3 m from the participant at diagonal angles of 45°, 30°, 15°, 0°, -15°, or -30°. To estimate the abduction angle, machine learning models were developed with the goniometer readings as the ground truth. Model performance was evaluated using the coefficient of determination (R²) and the mean absolute percentage error, which were 0.988 and 1.539%, respectively, for the trained model. This approach could estimate the shoulder abduction angle even when the camera was positioned diagonally with respect to the subject. Thus, the proposed models can be utilized for the real-time estimation of shoulder motion during rehabilitation or sports.
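To illustrate the kind of measurement the paper's machine learning models correct, the sketch below computes a naive two-dimensional shoulder abduction angle directly from 2-D pose landmarks, as the angle between the upper-arm vector and the trunk vector. This is not the authors' method (their models map landmark data to goniometer-calibrated angles with LightGBM); it is the raw projected angle that degrades when the camera views the subject diagonally, which motivates the learned correction. The function name and coordinate conventions are illustrative assumptions.

```python
import math

def abduction_angle_2d(shoulder, elbow, hip):
    """Naive 2-D abduction estimate (degrees): the angle between the
    upper-arm vector (shoulder -> elbow) and the trunk vector
    (shoulder -> hip), using image-plane coordinates only.
    Illustrative only; a diagonal camera foreshortens these vectors,
    which is why a learned correction is needed."""
    # Upper-arm vector in the image plane
    ax, ay = elbow[0] - shoulder[0], elbow[1] - shoulder[1]
    # Trunk vector in the image plane
    tx, ty = hip[0] - shoulder[0], hip[1] - shoulder[1]
    dot = ax * tx + ay * ty
    norm = math.hypot(ax, ay) * math.hypot(tx, ty)
    # Clamp for floating-point safety before acos
    cos_theta = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_theta))

# Image coordinates: y grows downward, as in typical pose-landmark output.
# Arm hanging alongside the trunk -> ~0 deg
print(round(abduction_angle_2d((0, 0), (0, 1), (0, 1)), 1))  # 0.0
# Arm raised horizontally -> ~90 deg
print(round(abduction_angle_2d((0, 0), (1, 0), (0, 1)), 1))  # 90.0
```

In the study's pipeline, landmark coordinates such as these (from MediaPipe) serve as features, and a LightGBM regressor trained against goniometer readings replaces this direct projection.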