

A dataset of paired head and eye movements during visual tasks in virtual environments.

Authors

Rubow Colin, Tsai Chia-Hsuan, Brewer Eric, Mattson Connor, Brown Daniel S, Zhang Haohan

Affiliations

Department of Mechanical Engineering, University of Utah, Salt Lake City, 84112, USA.

Robotics Center, University of Utah, Salt Lake City, 84112, USA.

Publication

Sci Data. 2024 Dec 5;11(1):1328. doi: 10.1038/s41597-024-04184-1.

DOI: 10.1038/s41597-024-04184-1
PMID: 39639071
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11621368/
Abstract

We describe a multimodal dataset of paired head and eye movements acquired in controlled virtual reality environments. Our dataset includes head and eye movement for n = 25 participants who interacted with four different virtual reality environments that required coordinated head and eye behaviors. Our data collection involved two visual tracking tasks and two visual searching tasks. Each participant performed each task three times, resulting in approximately 1080 seconds of paired head and eye movement and 129,611 data samples of paired head and eye rotations per participant. This dataset enables research into predictive models of intended head movement conditioned on gaze for augmented and virtual reality experiences, as well as assistive devices like powered exoskeletons for individuals with head-neck mobility limitations. This dataset also allows biobehavioral and mechanism studies of the variability in head and eye movement across different participants and tasks. The virtual environment developed for this data collection is open sourced and thus available for others to perform their own data collection and modify the environment.
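The reported totals imply a per-participant sampling rate of roughly 120 Hz (129,611 samples over about 1080 seconds). A minimal sketch of that arithmetic, together with a hypothetical record type for one paired sample (the field names and units are assumptions for illustration, not the dataset's actual schema):

```python
from dataclasses import dataclass

# Per-participant totals reported in the abstract.
SAMPLES_PER_PARTICIPANT = 129_611
SECONDS_PER_PARTICIPANT = 1_080  # 4 tasks x 3 repetitions

# Implied sampling rate of the paired head/eye stream.
sampling_rate_hz = SAMPLES_PER_PARTICIPANT / SECONDS_PER_PARTICIPANT

@dataclass
class PairedSample:
    """One paired head/eye rotation sample (hypothetical layout)."""
    t: float                        # timestamp in seconds
    head_rotation: tuple            # e.g. (yaw, pitch, roll) in degrees
    gaze_direction: tuple           # e.g. unit gaze vector in the head frame

print(f"~{sampling_rate_hz:.1f} Hz")  # prints ~120.0 Hz
```

A rate near 120 Hz is consistent with common VR headset eye trackers, but the authoritative sampling details are in the paper itself.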


Figures 1-6 (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0441/11621368/33ab9af4efd9/41597_2024_4184_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0441/11621368/57152a10dbbf/41597_2024_4184_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0441/11621368/5fd1a7761ffd/41597_2024_4184_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0441/11621368/acd1259914e0/41597_2024_4184_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0441/11621368/78dd8c5efd8a/41597_2024_4184_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0441/11621368/ab297c0929b7/41597_2024_4184_Fig6_HTML.jpg

Similar articles

1. A dataset of paired head and eye movements during visual tasks in virtual environments.
Sci Data. 2024 Dec 5;11(1):1328. doi: 10.1038/s41597-024-04184-1.
2. Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality.
IEEE Trans Vis Comput Graph. 2024 Nov;30(11):7277-7287. doi: 10.1109/TVCG.2024.3456164. Epub 2024 Oct 10.
3. EHTask: Recognizing User Tasks From Eye and Head Movements in Immersive Virtual Reality.
IEEE Trans Vis Comput Graph. 2023 Apr;29(4):1992-2004. doi: 10.1109/TVCG.2021.3138902. Epub 2023 Feb 28.
4. D-SAV360: A Dataset of Gaze Scanpaths on 360° Ambisonic Videos.
IEEE Trans Vis Comput Graph. 2023 Nov;29(11):4350-4360. doi: 10.1109/TVCG.2023.3320237. Epub 2023 Nov 2.
5. A tutorial: Analyzing eye and head movements in virtual reality.
Behav Res Methods. 2024 Dec;56(8):8396-8421. doi: 10.3758/s13428-024-02482-5. Epub 2024 Aug 8.
6. SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction.
IEEE Trans Vis Comput Graph. 2019 May;25(5):2002-2010. doi: 10.1109/TVCG.2019.2899187. Epub 2019 Feb 18.
7. Eye and head movements while encoding and recognizing panoramic scenes in virtual reality.
PLoS One. 2023 Feb 17;18(2):e0282030. doi: 10.1371/journal.pone.0282030. eCollection 2023.
8. Eye and head movements in visual search in the extended field of view.
Sci Rep. 2024 Apr 17;14(1):8907. doi: 10.1038/s41598-024-59657-5.
9. Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities.
Sci Rep. 2020 Feb 13;10(1):2539. doi: 10.1038/s41598-020-59251-5.
10. Gaze intersection points reveal the onset of visual processing during eye and head movement.
Annu Int Conf IEEE Eng Med Biol Soc. 2024 Jul;2024:1-4. doi: 10.1109/EMBC53108.2024.10781581.
