
A dataset of egocentric and exocentric view hands in interactive senses.

Authors

Cui Cui, Sunar Mohd Shahrizal, Su Goh Eg

Affiliations

Media and Game Innovation Centre of Excellence (MaGICX), Institute of Human Centered Engineering (iHumEn), Faculty of Computing, Universiti Teknologi Malaysia, 81310, Skudai, Johor Bahru, Malaysia.

Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia.

Publication details

Data Brief. 2024 Oct 9;57:111003. doi: 10.1016/j.dib.2024.111003. eCollection 2024 Dec.

Abstract

The dataset presents raw egocentric (first-person view) and exocentric (third-person view) data comprising 47,166 frame images. Egocentric and exocentric frames were extracted simultaneously from the original iPhone videos. The egocentric view captures close-range hand-gesture details and the attention of the iPhone wearer, while the exocentric view captures the hand gestures of all participants from a top-down perspective. The data provides frame images of two, three, and four people engaged in interactive games such as Poker, Checkers, and Dice. The data was collected in real environments under natural, white, yellow, and dim lighting conditions. The dataset contains diverse hand gestures, including challenging instances such as motion blur, extreme deformation, sharp shadows, and extremely dim light. Researchers working on artificial intelligence (AI) interaction games in extended reality can create sub-datasets from the metadata for either or both of the egocentric and exocentric perspectives, facilitating AI understanding of hand gestures in human interactive games. Researchers can also extract hand gestures relevant to studies of hand-object interaction, such as hands deformed by holding a chess piece, blurred hands gripping dice containers, and hands occluded by playing cards. Researchers can annotate bounding boxes and hand edges for semi-supervised and supervised hand detection, hand segmentation, and hand classification, improving the ability of AI to distinguish each player's hand gestures. Unsupervised and self-supervised research can also be conducted directly on this dataset.
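The abstract notes that sub-datasets can be created from the metadata for one or both perspectives. As a minimal sketch of how such a split might look, the snippet below groups frame names by view; the `ego_`/`exo_` filename prefixes and the example filenames are hypothetical assumptions for illustration, not the dataset's actual layout.

```python
# Sketch: split frame filenames into egocentric / exocentric sub-datasets.
# The "ego_"/"exo_" prefix naming scheme is an assumption for illustration;
# adapt it to the dataset's real directory or metadata structure.

def split_by_view(frames):
    """Group frame names by perspective based on a filename prefix."""
    subsets = {"egocentric": [], "exocentric": []}
    for name in frames:
        if name.startswith("ego_"):
            subsets["egocentric"].append(name)
        elif name.startswith("exo_"):
            subsets["exocentric"].append(name)
    return subsets

# Hypothetical filenames, one game and frame index per name.
frames = ["ego_poker_0001.jpg", "exo_poker_0001.jpg",
          "ego_dice_0002.jpg", "exo_checkers_0003.jpg"]
subsets = split_by_view(frames)
print(len(subsets["egocentric"]), len(subsets["exocentric"]))  # 2 2
```

The same filtering idea extends to game type or lighting condition if those attributes are encoded in the metadata.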


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/166b/11528682/b5c439640d13/gr1.jpg
