Yang Zhibo, Mondal Sounak, Ahn Seoyoung, Zelinsky Gregory, Hoai Minh, Samaras Dimitris
Stony Brook University, Stony Brook, NY 11794, USA.
Comput Vis ECCV. 2022 Oct;13664:52-68. doi: 10.1007/978-3-031-19772-7_4. Epub 2022 Oct 23.
The prediction of human gaze behavior is important for building human-computer interaction systems that can anticipate the user's attention. Computer vision models have been developed to predict the fixations made by people as they search for target objects. But what about when the target is not in the image? Equally important is knowing how people search when they cannot find a target, and when they stop searching. In this paper, we propose a data-driven computational model that addresses the search-termination problem and predicts the scanpath of search fixations made by people searching for targets that do not appear in images. We model visual search as an imitation learning problem and represent the internal knowledge that the viewer acquires through fixations using a novel state representation that we call Foveated Feature Maps (FFMs). FFMs integrate a simulated foveated retina into a pretrained ConvNet that produces an in-network feature pyramid, all with minimal computational overhead. Our method integrates FFMs as the state representation in inverse reinforcement learning. Experimentally, we improve the state of the art in predicting human target-absent search behavior on the COCO-Search18 dataset. Code is available at: https://github.com/cvlab-stonybrook/Target-absent-Human-Attention.
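The core idea of the abstract, blending a ConvNet feature pyramid according to eccentricity from the current fixation so that features near the fovea stay sharp while peripheral features come from coarser (blurrier) levels, can be sketched as follows. This is a minimal illustrative NumPy sketch under stated assumptions: the function name, the nearest-neighbor upsampling, the triangular level-interpolation weights, and the `sigma` falloff parameter are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def foveated_feature_map(pyramid, fixation, sigma=0.25):
    """Blend a feature pyramid into a single foveated feature map.

    pyramid  : list of (C, H, W) arrays ordered fine -> coarse; each
               coarser level stands in for a more blurred periphery.
    fixation : (x, y) in normalized [0, 1] image coordinates.
    sigma    : assumed eccentricity scale; larger = slower falloff.
    """
    C, H, W = pyramid[0].shape

    # Upsample every level to the finest resolution (nearest-neighbor,
    # assuming each level's size evenly divides the finest size).
    levels = []
    for f in pyramid:
        _, h, w = f.shape
        levels.append(np.repeat(np.repeat(f, H // h, axis=1), W // w, axis=2))

    # Eccentricity of each pixel from the fixation point.
    ys, xs = np.mgrid[0:H, 0:W]
    ecc = np.sqrt((xs / W - fixation[0]) ** 2 + (ys / H - fixation[1]) ** 2)

    # Map eccentricity to a fractional pyramid level: 0 at the fovea,
    # the coarsest level in the far periphery.
    L = len(pyramid)
    lvl = np.clip(ecc / sigma * (L - 1), 0.0, L - 1.0)

    # Linearly interpolate between the two surrounding levels per pixel
    # via triangular weights (exact linear interpolation in level index).
    out = np.zeros((C, H, W))
    for k in range(L):
        w_k = np.clip(1.0 - np.abs(lvl - k), 0.0, 1.0)
        out += w_k * levels[k]
    return out
```

At the fixated pixel the output reproduces the finest pyramid level exactly, and in the far periphery it collapses to the coarsest one; an agent's state could then be this map recomputed after every simulated fixation.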