Institute of Computer Science, Pedagogical University of Krakow, 2 Podchorazych Ave, 30-084 Krakow, Poland.
AGH University of Science and Technology, Cryptography and Cognitive Informatics Research Group, 30 Mickiewicza Ave, 30-059 Krakow, Poland.
Sensors (Basel). 2017 Nov 10;17(11):2590. doi: 10.3390/s17112590.
The aim of this paper is to propose and evaluate a novel method of template generation, matching, comparison and visualization applied to motion capture (kinematic) analysis. To evaluate our approach, we used 560 motion capture (MoCap) recordings of various karate techniques, performed by two highly skilled black-belt karate athletes and acquired with wearable sensors. We evaluated the quality of the generated templates; we validated the matching algorithm that calculates similarities and differences between various MoCap data; and we examined visualizations of important differences and similarities between MoCap data. We concluded that our algorithm works best with relatively short (2-4 s) actions that can be averaged and aligned within the dynamic time warping (DTW) framework. In practice, the methodology is designed to optimize the performance of full-body techniques in various sport disciplines, for example combat sports and martial arts. The approach can also be used to generate templates, or to compare the correct performance of techniques between top athletes, in order to build a knowledge base of reference MoCap videos. The motion templates generated by our method can also be used for action recognition. We used a DTW classifier with angle-based features to classify various karate kicks. We performed leave-one-out action recognition for the Shorin-ryu and Oyama karate masters separately; in this case, 100% of actions were correctly classified. In another experiment, we used templates generated from the Oyama master's recordings to classify the Shorin-ryu master's recordings, and vice versa. In this experiment, the overall recognition rate was 94.2%, which is a very good result for this type of complex action.
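The paper's own classifier is not reproduced here, but the idea it names, a DTW classifier over per-frame angle features with nearest-template matching, can be sketched minimally as follows. All names and the toy "knee angle" trajectories below are illustrative assumptions, not data or code from the paper:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two feature sequences of shape (T,) or (T, D)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if a.ndim == 1:
        a = a[:, None]
    if b.ndim == 1:
        b = b[:, None]
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # per-frame distance
            # extend the cheapest of the three admissible warping steps
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, templates):
    """Nearest-template classification: return the label whose template
    has the smallest DTW distance to the query sequence."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Hypothetical toy data: 1-D "knee angle" trajectories for two kick templates.
templates = {
    "mae-geri": [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0],
    "mawashi-geri": [0.0, 2.0, 4.0, 2.0, 0.0],
}
query = [0.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 0.0]  # slower mae-geri-like kick
print(classify(query, templates))  # prints "mae-geri"
```

DTW's elastic alignment is what lets the slower query match its template exactly despite the differing sequence lengths, which is why the paper restricts the approach to short actions that can be meaningfully averaged and aligned.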