
A diverse and multi-modal gait dataset of indoor and outdoor walks acquired using multiple cameras and sensors.

Affiliations

Liverpool John Moores University, Liverpool, UK.

University of Sharjah, Sharjah, United Arab Emirates.

Publication Information

Sci Data. 2023 May 26;10(1):320. doi: 10.1038/s41597-023-02161-8.

Abstract

Gait datasets are often limited by a lack of diversity in terms of participants, appearance, viewing angle, environments, annotations, and availability. We present a primary gait dataset comprising 1,560 annotated casual walks from 64 participants in both indoor and outdoor real-world environments. We used two digital cameras and a wearable digital goniometer to capture visual and motion-signal gait data, respectively. Traditional methods of gait identification are often affected by the viewing angle and appearance of the participant; this dataset therefore focuses on diversity across several aspects (e.g., participants' attributes, background variations, and viewing angles). The dataset is captured from 8 viewing angles in 45° increments, along with alternative appearances for each participant, for example via a change of clothing. The dataset provides 3,120 videos containing approximately 748,800 image frames with detailed annotations, including approximately 56,160,000 bodily keypoint annotations (75 keypoints per video frame) and approximately 1,026,480 motion data points captured from the digital goniometer for three limb segments (thigh, upper arm, and head).
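
The reported totals are internally consistent. The minimal Python sketch below (not from the paper; the per-video frame count is derived from the stated figures rather than given explicitly) shows how the video, frame, and keypoint counts relate:

```python
# Sketch of how the abstract's dataset figures relate to one another.
# Assumption: each of the 1,560 walks is recorded by both cameras,
# giving the 3,120 videos reported; frames per video are derived.

walks = 1_560                       # annotated casual walks
cameras = 2                         # digital cameras per walk
videos = walks * cameras            # 3,120 videos reported

total_frames = 748_800              # approximate frame count reported
frames_per_video = total_frames // videos   # ~240 frames per video (derived)

keypoints_per_frame = 75
total_keypoints = total_frames * keypoints_per_frame   # 56,160,000 reported

print(videos, frames_per_video, total_keypoints)
# 3120 240 56160000
```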


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d03a/10220063/9276c00259e3/41597_2023_2161_Fig1_HTML.jpg
