School of Media and Design, Hangzhou Dianzi University, Hangzhou, Zhejiang, China.
Yangzhou Polytechnic College, Yangzhou, Jiangsu, China.
Technol Health Care. 2024;32(6):4077-4096. doi: 10.3233/THC-231860.
The objective performance evaluation of an athlete is essential for detailed research into elite sports. The automatic identification and classification of football teaching and training exercises overcome the shortcomings of manual analytical approaches. Video monitoring is vital for detecting human behavior and preventing or promptly reducing inappropriate actions. The video's digital content is classified by relevance according to these individual actions.
The research goal is to systematically use data from an inertial measurement unit (IMU) together with computer vision analysis for deep learning of football teaching motion recognition (DL-FTMR). Multiple literature databases were searched. The included studies examined and analyzed training through deep model-construction learning methods. The investigations show the ability to distinguish the efficiency of qualified and less-qualified personnel in sport-specific video-based decision-making assessments.
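The abstract describes combining IMU data with computer vision features for motion recognition but does not specify the fusion scheme. A minimal sketch of one common approach, early (feature-level) fusion followed by a linear classifier head, is shown below; all array shapes, feature names, and the four action classes are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features (illustrative, not from the paper):
imu_feat = rng.normal(size=(8, 6))      # 8 frames x 6 IMU channels (accel + gyro)
vision_feat = rng.normal(size=(8, 16))  # 8 frames x 16 pose/appearance features

# Early (feature-level) fusion: concatenate the two modalities per frame
fused = np.concatenate([imu_feat, vision_feat], axis=1)  # shape (8, 22)

# A linear classifier head over the fused features (random weights here;
# in practice these would be learned end to end with the backbone)
num_classes = 4  # e.g. pass, shot, dribble, tackle (assumed labels)
W = rng.normal(size=(fused.shape[1], num_classes))
logits = fused @ W

# Softmax over classes, then per-frame predictions
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pred = probs.argmax(axis=1)
```

Late fusion (training separate IMU and vision classifiers and averaging their probabilities) is an equally plausible design; the paper's abstract does not say which variant DL-FTMR uses.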
Video-based research is an effective way of assessing decision-making because it can present changing in-game decision-making scenarios with greater ecological validity than static images. The data showed that the filtering accuracy of responses improved without any loss in response time. This observation indicates that practicing with a video monitoring system offers a play view close to that seen in a real game and can be an essential way to improve the perception of selection precision. This study discusses publicly accessible training datasets for Human Activity Recognition (HAR) and presents a dataset that combines various components. The study also used the UT-Interaction dataset to identify complex events.
Thus, the experimental results of DL-FTMR give a performance ratio of 94.5%, a behavior-processing ratio of 92.4%, an athlete energy-level ratio of 92.5%, an interaction ratio of 91.8%, a prediction ratio of 92.5%, a sensitivity ratio of 93.7%, and a precision ratio of 94.86%, compared with state-of-the-art methodologies: the optimized convolutional neural network (OCNN), the Gaussian mixture model (GMM), You Only Look Once (YOLO), and Human Activity Recognition state-of-the-art methods (HAR-SAM).
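The reported sensitivity and precision ratios follow the standard definitions from a per-class confusion matrix. A small sketch with made-up counts (the numbers below are illustrative only, not the paper's data):

```python
# Toy confusion counts for one action class (illustrative, not from the paper)
tp, fp, fn, tn = 90, 5, 6, 99

# Precision: fraction of predicted positives that are correct
precision = tp / (tp + fp)

# Sensitivity (recall): fraction of actual positives that are recovered
sensitivity = tp / (tp + fn)
```

Averaging these per-class values across all action classes would yield overall figures comparable to the 94.86% precision and 93.7% sensitivity reported for DL-FTMR.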
This finding shows that training with a video monitoring system that provides a play view similar to that seen in a real game can be a valuable technique for improving the perception of selection accuracy.