Ruescas-Nicolau Ana Virginia, Medina-Ripoll Enrique José, Parrilla Bernabé Eduardo, de Rosario Martínez Helios
Instituto de Biomecánica - IBV, Universitat Politècnica de València, Edificio 9C. Camí de Vera s/n, 46022 Valencia, Spain.
Data Brief. 2024 Feb 6;53:110157. doi: 10.1016/j.dib.2024.110157. eCollection 2024 Apr.
In this paper, we present a dataset that relates 2D and 3D human pose keypoints estimated from images to the locations of 3D anatomical landmarks. The dataset contains 51,051 poses obtained from 71 persons recorded in A-Pose and while performing 7 movements (walking, running, squatting, and four types of jumping). These poses were scanned to build a collection of 3D moving textured meshes with anatomical correspondence. Each mesh in that collection was used to obtain the 3D locations of 53 anatomical landmarks, and 48 images were rendered using virtual cameras with different perspectives. 2D pose keypoints were obtained from those images using the MediaPipe Human Pose Landmarker, and their corresponding 3D keypoints were calculated by linear triangulation. The dataset consists of one folder per participant, containing two Track Row Column (TRC) files and one JSON file for each movement sequence. One TRC file stores the triangulated 3D keypoints, while the other contains the 3D anatomical landmarks. The JSON file stores the 2D keypoints and the calibration parameters of the virtual cameras. The anthropometric characteristics of the participants are annotated in a single CSV file. These data are intended for developments that adapt existing computer vision human pose solutions to biomechanical applications or simulations. The dataset can also be used in other applications, such as training neural networks for human motion analysis and studying their relationship with anthropometric characteristics.
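The 3D keypoints described above were obtained by linear triangulation of the MediaPipe 2D detections across the virtual camera views. The following is a minimal sketch of direct-linear-transform (DLT) triangulation, assuming that 3x4 projection matrices have been built from the camera calibration parameters stored in the JSON files; the function and variable names are illustrative and not part of the dataset.

```python
import numpy as np

def triangulate_point(projection_matrices, points_2d):
    """Linear (DLT) triangulation of a single keypoint seen by several cameras.

    projection_matrices : iterable of 3x4 arrays, one per virtual camera
    points_2d           : iterable of (u, v) pixel coordinates, same camera order
    Returns the 3D position as a length-3 array in the common reference frame.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, points_2d):
        P = np.asarray(P, dtype=float)
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenise
```

With more than two views, the SVD solution averages detection noise across cameras, which is why all available virtual views can be combined in a single solve.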
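As a usage illustration, the sketch below reads one participant's files with pandas and the standard library. The folder and file names, the number of TRC header lines, and the JSON field names are assumptions made for the example only, since they are not specified in the abstract.

```python
import json
from pathlib import Path

import pandas as pd

# Assumed layout: one folder per participant with, per movement sequence,
# two TRC files (triangulated keypoints / anatomical landmarks) and one JSON file.
participant = Path("P001")   # folder name is an assumption
sequence = "walking"         # sequence name is an assumption

def read_trc(path):
    """Read a TRC file, assuming the conventional layout: metadata in the first
    three lines, marker labels on line 4, coordinate labels on line 5, data after."""
    return pd.read_csv(path, sep="\t", skiprows=5, header=None)

keypoints_3d = read_trc(participant / f"{sequence}_keypoints.trc")
landmarks_3d = read_trc(participant / f"{sequence}_landmarks.trc")

with open(participant / f"{sequence}.json") as f:
    seq = json.load(f)
# Assumed fields: per-frame 2D keypoints and per-camera calibration parameters.
keypoints_2d = seq.get("keypoints_2d")
cameras = seq.get("cameras")

# Anthropometric characteristics of all participants in a single CSV file.
anthropometrics = pd.read_csv("anthropometrics.csv")
```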