Tits Mickaël, Laraba Sohaïb, Caulier Eric, Tilmanne Joëlle, Dutoit Thierry
Numediart Institute, University of Mons, Belgium.
University of Nice Sophia Antipolis, Nice, France.
Data Brief. 2018 May 23;19:1214-1221. doi: 10.1016/j.dib.2018.05.088. eCollection 2018 Aug.
In this article, we present a large 3D motion capture dataset of Taijiquan martial art gestures (n = 2200 samples) that includes 13 classes (corresponding to Taijiquan techniques) executed by 12 participants of various skill levels. Participants' levels were rated by three experts on a scale of 0 to 10. The dataset was captured using two motion capture systems simultaneously: 1) Qualisys, a sophisticated optical motion capture system of 11 cameras that tracks 68 retroreflective markers at 179 Hz, and 2) Microsoft Kinect V2, a low-cost markerless time-of-flight depth sensor that tracks 25 locations of a person's skeleton at 30 Hz. Data from both systems were synchronized manually. Qualisys data were manually corrected and then processed to fill any missing data. Data were also manually annotated for segmentation. Both segmented and unsegmented data are provided in this dataset. This article details the recording protocol as well as the processing and annotation procedures. The data were initially recorded for gesture recognition and skill evaluation, but they are also suited for research on synthesis, segmentation, multi-sensor data comparison and fusion, sports science, and more general research on human motion or motion capture. A preliminary analysis was conducted by Tits et al. (2017) [1] on a part of the dataset to extract morphology-independent motion features for skill evaluation. Results of this analysis are presented in their communication: "Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation" (10.1145/3077981.3078037) [1]. Data are available for research purposes (license CC BY-NC-SA 4.0) at https://github.com/numediart/UMONS-TAICHI.
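The two systems record at different frame rates (179 Hz for Qualisys, 30 Hz for Kinect V2), so any joint analysis requires putting both streams on a common time base. The sketch below is not the authors' pipeline (synchronization in the dataset was done manually); it is only a minimal, generic illustration of resampling a lower-rate skeleton stream onto a higher-rate clock by per-coordinate linear interpolation. The function name and the toy data are assumptions for illustration.

```python
import numpy as np

def resample_stream(timestamps, frames, target_timestamps):
    """Linearly interpolate a (T, J, 3) skeleton stream onto new timestamps."""
    T, J, C = frames.shape
    flat = frames.reshape(T, J * C)
    out = np.empty((len(target_timestamps), J * C))
    # np.interp handles one 1-D signal at a time, so loop over joint coordinates.
    for d in range(J * C):
        out[:, d] = np.interp(target_timestamps, timestamps, flat[:, d])
    return out.reshape(len(target_timestamps), J, C)

# Toy example: a 30 Hz Kinect-like stream (25 joints) upsampled to a
# 179 Hz Qualisys-like clock over one second of motion.
rng = np.random.default_rng(0)
t_kinect = np.arange(0, 1, 1 / 30)           # 30 frames in 1 s
kinect = rng.standard_normal((len(t_kinect), 25, 3))
t_qualisys = np.arange(0, 1, 1 / 179)        # 179 frames in 1 s
aligned = resample_stream(t_kinect, kinect, t_qualisys)
print(aligned.shape)  # (179, 25, 3)
```

In practice one would also estimate and remove the constant time offset between the two recordings (e.g. from a shared sync event) before interpolating; the dataset's manual synchronization serves that role.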