Posner Charli, Sánchez-Mompó Adrián, Mavromatis Ioannis, Al-Ani Mustafa
Bristol Research and Innovation Laboratory, Toshiba Europe Ltd., 32 Queen Square, Bristol, BS1 4ND, United Kingdom.
Data Brief. 2023 Jun 22;49:109334. doi: 10.1016/j.dib.2023.109334. eCollection 2023 Aug.
A dataset of body tracking information is presented. The dataset consists of 315 captured walking sequences, each captured simultaneously by two Azure Kinect devices whose interleaved captures effectively double the frame rate. Fifteen participants took part in the experiment. Each experiment comprises seven walking actions performed along three predefined trajectories, yielding 21 sequences per participant. The data were collected using the Azure Kinect Sensor SDK and later processed using the official tools and libraries provided by Microsoft. For each sequence and trajectory, the positions and orientations of thirty-two tracked joints were obtained and saved. The dataset is structured as follows: the experiments from each subject are saved in a single directory, and each directory contains multiple JSON files of timestamped body tracking information to enable the fusion of the two device streams. A calibration file is also provided, enabling the mapping of coordinates between the two Azure Kinect devices capturing the data (from the coordinate system of the device known as the Subordinate device to that of the Master device). These data can be used to train neural networks for human motion prediction tasks or to test pre-existing algorithms on Azure Kinect data. The dataset could also aid gait recognition and analysis, as well as action recognition and other surveillance activities. The dataset can be found at https://zenodo.org/record/7997856.
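The two processing steps the abstract describes, mapping Subordinate-device coordinates into the Master frame via the calibration file and fusing the two timestamped streams into one higher-rate sequence, could be sketched as below. This is a minimal illustration, not code from the dataset: the JSON schema (a list of frames with `timestamp` and joint positions) and the calibration parameters (a 3x3 rotation and a translation vector, the usual rigid-body form) are assumptions, and the actual file layout on Zenodo may differ.

```python
import json
import numpy as np


def load_frames(path):
    """Load one body-tracking JSON file.

    Assumes the file holds a list of frames, each with a "timestamp"
    field and a "joints" array of [x, y, z] positions. This schema is
    hypothetical -- check the dataset's own files for the real layout.
    """
    with open(path) as f:
        return json.load(f)


def sub_to_master(points, rotation, translation):
    """Map joint positions from the Subordinate to the Master frame.

    `rotation` (3x3) and `translation` (3-vector) stand in for the
    contents of the provided calibration file; the transform applied is
    the standard rigid-body mapping p' = R @ p + t.
    """
    pts = np.asarray(points, dtype=float)
    return pts @ np.asarray(rotation, dtype=float).T + np.asarray(translation, dtype=float)


def interleave(master_frames, sub_frames):
    """Fuse the two device streams into one sequence ordered by
    timestamp, effectively doubling the frame rate."""
    return sorted(master_frames + sub_frames, key=lambda fr: fr["timestamp"])
```

With frames captured at offset timestamps, `interleave` produces a single chronologically ordered stream, and `sub_to_master` brings the Subordinate joints into a shared coordinate system before fusion.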