
Characterizing and automatically detecting smooth pursuit in a large-scale ground-truth data set of dynamic natural scenes.

Author information

Mikhail Startsev, Ioannis Agtzidis, Michael Dorr

Affiliations

Human-Machine Communication, Technical University of Munich, Munich, Germany.

Publication information

J Vis. 2019 Dec 2;19(14):10. doi: 10.1167/19.14.10.

Abstract

Eye movements are fundamental to our visual experience of the real world, and tracking smooth pursuit eye movements plays an important role because of the dynamic nature of our environment. Static images, however, do not induce this class of eye movements, and commonly used synthetic moving stimuli lack ecological validity because of their low scene complexity compared to the real world. Traditionally, ground truth data for pursuit analyses with naturalistic stimuli are obtained via laborious hand-labelling. Therefore, previous studies typically remained small in scale. We here present the first large-scale quantitative characterization of human smooth pursuit. In order to achieve this, we first provide a methodological framework for such analyses by collecting a large set of manual annotations for eye movements in dynamic scenes and by examining the bias and variance of human annotators. To enable further research on even larger future data sets, we also describe, improve, and thoroughly analyze a novel algorithm to automatically classify eye movements. Our approach incorporates unsupervised learning techniques and thus demonstrates improved performance with the addition of unlabelled data. The code and data related to our manual and automated eye movement annotation are publicly available via https://web.gin.g-node.org/ioannis.agtzidis/gazecom_annotations/.
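The abstract describes the classification task only at a high level; the actual algorithm and data are in the full paper and the repository linked above. As a rough illustration of what sample-level eye-movement classification looks like (this is not the authors' method, and the speed thresholds below are assumed values chosen purely for illustration), a minimal velocity-based labeler in Python might be:

```python
# Minimal, illustrative eye-movement classifier (NOT the paper's algorithm).
# Per-sample gaze speed is computed, then two assumed thresholds split samples
# into fixation (slow), smooth-pursuit candidate (intermediate), and saccade
# (fast). Assumes at least two gaze samples with strictly increasing timestamps.
import numpy as np

def classify_gaze(x_deg, y_deg, timestamps_s,
                  fixation_max_dps=5.0,     # assumed fixation ceiling, deg/s
                  saccade_min_dps=100.0):   # assumed saccade floor, deg/s
    """Label each gaze sample as 'fixation', 'pursuit', or 'saccade'.

    x_deg, y_deg: gaze position in degrees of visual angle.
    timestamps_s: sample timestamps in seconds.
    """
    x, y, t = map(np.asarray, (x_deg, y_deg, timestamps_s))
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt  # deg/s between samples
    speed = np.append(speed, speed[-1])            # pad back to input length

    labels = np.full(len(x), 'pursuit', dtype=object)
    labels[speed < fixation_max_dps] = 'fixation'
    labels[speed >= saccade_min_dps] = 'saccade'
    return labels
```

Real detectors, including the one analyzed in this paper, are substantially more involved: they smooth the velocity signal, enforce minimum event durations, and, in this work, additionally exploit unlabelled recordings through unsupervised learning. The repository at the URL above is the authoritative reference for both the annotations and the code.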

