Clouthier Allison L, Ross Gwyneth B, Graham Ryan B
School of Human Kinetics, Faculty of Health Sciences, University of Ottawa, Ottawa, ON, Canada.
Front Bioeng Biotechnol. 2020 Jan 21;7:473. doi: 10.3389/fbioe.2019.00473. eCollection 2019.
Movement screens are used to assess the overall movement quality of an athlete. However, these rely on visual observation of a series of movements and subjective scoring. Data-driven methods to provide objective scoring of these movements are being developed. These currently use optical motion capture and require manual pre-processing of data to identify the start and end points of each movement. Therefore, we aimed to use deep learning techniques to automatically identify movements typically found in movement screens and to assess the feasibility of performing the classification based on wearable sensor data. Optical motion capture data were collected on 417 athletes performing 13 athletic movements. We trained an existing deep neural network architecture that combines convolutional and recurrent layers on a subset of 278 athletes. A validation subset of 69 athletes was used to tune the hyperparameters, and the final network was tested on the remaining 70 athletes. Simulated inertial measurement data were generated based on the optical motion capture data, and the network was trained on these data for different combinations of body segments. Classification accuracy was similar for networks trained using the optical and full-body simulated inertial measurement unit data, at 90.1 and 90.2%, respectively. A good classification accuracy of 85.9% was obtained using as few as three simulated sensors placed on the torso and shanks. However, using three simulated sensors on the torso and upper arms, or fewer than three sensors, resulted in poor accuracy. These results for simulated sensor data indicate the feasibility of classifying athletic movements using a small number of wearable sensors. This could allow objective, data-driven methods that automatically score overall movement quality from wearable sensor data to be easily implemented in the field.
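The abstract mentions generating simulated inertial measurement data from the optical motion capture trajectories. The paper does not give the exact procedure here, but a common approach is to differentiate a segment's marker trajectory twice and subtract gravity to approximate the specific force an accelerometer would sense. The sketch below illustrates this idea only; the function name and the simplifying assumption that the sensor frame is aligned with the lab frame are my own, not taken from the paper.

```python
import numpy as np

def simulate_accelerometer(positions, fs, g=np.array([0.0, 0.0, -9.81])):
    """Approximate an accelerometer signal from an optical marker trajectory.

    positions : (n_frames, 3) array of marker positions in metres (lab frame)
    fs        : sampling frequency in Hz
    g         : gravity vector in the lab frame

    An accelerometer measures specific force (linear acceleration minus
    gravity). Here the sensor axes are assumed aligned with the lab frame;
    a full simulation would rotate the signal into the body-fixed frame
    using the segment's orientation.
    """
    dt = 1.0 / fs
    # Second-order finite differences give linear acceleration.
    acc = np.gradient(np.gradient(positions, dt, axis=0), dt, axis=0)
    return acc - g

# A stationary marker yields a constant reading of -g (about +9.81 m/s^2
# on the vertical axis), as a real accelerometer at rest would report.
reading = simulate_accelerometer(np.zeros((100, 3)), fs=100.0)
```

In practice each simulated sensor (torso, shanks, upper arms, etc.) would be driven by the corresponding body segment's trajectory and orientation, and the resulting multi-channel time series fed to the convolutional-recurrent classifier.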