Suppr 超能文献



MAP3D: An explorative approach for automatic mapping of real-world eye-tracking data on a virtual 3D model.

Authors

Stein Isabell, Jossberger Helen, Gruber Hans

Affiliations

University of Regensburg, Germany.

University of Turku, Finland.

Publication

J Eye Mov Res. 2023 May 31;15(3). doi: 10.16910/jemr.15.3.8. eCollection 2022.

DOI: 10.16910/jemr.15.3.8
PMID: 39135740
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11318232/
Abstract

Mobile eye tracking helps to investigate real-world settings, in which participants can move freely. This enhances the studies' ecological validity but poses challenges for the analysis. Often, the 3D stimulus is reduced to a 2D image (reference view) and the fixations are manually mapped to this 2D image. This leads to a loss of information about the three-dimensionality of the stimulus. Using several reference images, from different perspectives, poses new problems, in particular concerning the mapping of fixations in the transition areas between two reference views. A newly developed approach (MAP3D) is presented that enables generating a 3D model and automatic mapping of fixations to this virtual 3D model of the stimulus. This avoids problems with the reduction to a 2D reference image and with transitions between images. The x, y and z coordinates of the fixations are available as a point cloud and as .csv output. First exploratory application and evaluation tests are promising: MAP3D offers innovative ways of post-hoc mapping fixation data on 3D stimuli with open-source software and thus provides cost-efficient new avenues for research.
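The abstract notes that MAP3D makes the x, y, and z coordinates of fixations available as a point cloud and as .csv output. A minimal sketch of consuming such an export in Python; the column names and sample values below are assumptions for illustration, not the tool's documented schema:

```python
import csv
import io

# Hypothetical MAP3D-style export: one row per fixation, with x, y, z
# coordinates on the virtual 3D model. Column names are assumed.
SAMPLE_CSV = """fixation_id,x,y,z
1,0.12,0.40,1.05
2,0.15,0.38,1.02
3,0.90,0.10,0.55
"""

def load_fixation_cloud(text):
    """Parse a fixation .csv into a list of (x, y, z) tuples."""
    reader = csv.DictReader(io.StringIO(text))
    return [(float(r["x"]), float(r["y"]), float(r["z"])) for r in reader]

def centroid(points):
    """Mean position of the fixation cloud, e.g. as a quick sanity check."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

points = load_fixation_cloud(SAMPLE_CSV)
print(len(points))        # 3
print(centroid(points))
```

From here the list of points could be handed to any point-cloud viewer or analysis library; nothing above depends on MAP3D itself beyond the stated assumption of a per-fixation x/y/z .csv layout.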


Figures 1–24 (PMC11318232):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/997ae88fdc22/jemr-15-03-h-figure-01.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/39f5cd43208f/jemr-15-03-h-figure-02.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/868aae42e456/jemr-15-03-h-figure-03.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/8711ccc8f751/jemr-15-03-h-figure-04.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/348e046b10a8/jemr-15-03-h-figure-05.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/83b974f29a4d/jemr-15-03-h-figure-06.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/04b3fe98afda/jemr-15-03-h-figure-07.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/431b0b1e4168/jemr-15-03-h-figure-08.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/3182e9ae5228/jemr-15-03-h-figure-09.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/1735a2d0595d/jemr-15-03-h-figure-10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/3ed4817aa4cb/jemr-15-03-h-figure-11.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/183fac409e6c/jemr-15-03-h-figure-12.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/0cd176960fde/jemr-15-03-h-figure-13.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/d0e2e96c2368/jemr-15-03-h-figure-14.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/c8cef1a29d33/jemr-15-03-h-figure-15.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/992c25465b45/jemr-15-03-h-figure-16.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/fc451b8a8065/jemr-15-03-h-figure-17.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/9cc2f299be77/jemr-15-03-h-figure-18.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/6ed2c0e65762/jemr-15-03-h-figure-19.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/16ce7895a785/jemr-15-03-h-figure-20.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/6a559bc94259/jemr-15-03-h-figure-21.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/3ddaec1f0777/jemr-15-03-h-figure-22.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/c92b683f1f51/jemr-15-03-h-figure-23.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3c5/11318232/e4e11c7e02ac/jemr-15-03-h-figure-24.jpg

Similar articles

1
MAP3D: An explorative approach for automatic mapping of real-world eye-tracking data on a virtual 3D model.
J Eye Mov Res. 2023 May 31;15(3). doi: 10.16910/jemr.15.3.8. eCollection 2022.
2
Eye-tracking Analysis of Interactive 3D Geovisualization.
J Eye Mov Res. 2017 May 31;10(3). doi: 10.16910/jemr.10.3.2.
3
From lab-based studies to eye-tracking in virtual and real worlds: conceptual and methodological problems and solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 20.8.2019.
J Eye Mov Res. 2019 Nov 25;12(7). doi: 10.16910/jemr.12.7.8.
4
Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.
Behav Res Methods. 2018 Oct;50(5):2004-2015. doi: 10.3758/s13428-017-0969-4.
5
Eye movement characteristics in a mental rotation task presented in virtual reality.
Front Neurosci. 2023 Mar 27;17:1143006. doi: 10.3389/fnins.2023.1143006. eCollection 2023.
6
Evaluation of accuracy of photogrammetry with 3D scanning and conventional impression method for craniomaxillofacial defects using a software analysis.
Trials. 2022 Dec 27;23(1):1048. doi: 10.1186/s13063-022-07005-1.
7
Map3D: Registration-Based Multi-Object Tracking on 3D Serial Whole Slide Images.
IEEE Trans Med Imaging. 2021 Jul;40(7):1924-1933. doi: 10.1109/TMI.2021.3069154. Epub 2021 Jun 30.
8
Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images.
Med Phys. 2015 Jan;42(1):335-47. doi: 10.1118/1.4903945.
9
Recording human electrocorticographic (ECoG) signals for neuroscientific research and real-time functional cortical mapping.
J Vis Exp. 2012 Jun 26(64):3993. doi: 10.3791/3993.
10
Erratum: Eyestalk Ablation to Increase Ovarian Maturation in Mud Crabs.
J Vis Exp. 2023 May 26(195). doi: 10.3791/6561.

Cited by

1
The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.
Behav Res Methods. 2025 Jan 6;57(1):46. doi: 10.3758/s13428-024-02529-7.

References

1
Investigating visual expertise in sculpture: A methodological approach using eye tracking.
J Eye Mov Res. 2022 Jun 30;15(2). doi: 10.16910/jemr.15.2.5. eCollection 2022.
2
Accurate, dense, and robust multiview stereopsis.
IEEE Trans Pattern Anal Mach Intell. 2010 Aug;32(8):1362-76. doi: 10.1109/TPAMI.2009.161.