
Scene context automatically drives predictions of object transformations.

Affiliations

Donders Institute for Brain, Cognition and Behaviour, Radboud University, Thomas van Aquinostraat 4, Nijmegen 6525 GD, the Netherlands; Department of Psychology, Amsterdam Brain & Cognition Center, University of Amsterdam, Nieuwe Achtergracht 129-B, Amsterdam 1018 WS, the Netherlands.

Donders Institute for Brain, Cognition and Behaviour, Radboud University, Thomas van Aquinostraat 4, Nijmegen 6525 GD, the Netherlands; Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands.

Publication

Cognition. 2023 Sep;238:105521. doi: 10.1016/j.cognition.2023.105521. Epub 2023 Jun 22.

Abstract

As our viewpoint changes, the whole scene around us rotates coherently. This allows us to predict how one part of a scene (e.g., an object) will change by observing other parts (e.g., the scene background). While human object perception is known to be strongly context-dependent, previous research has largely focused on how scene context can disambiguate fixed object properties, such as identity (e.g., a car is easier to recognize on a road than on a beach). It remains an open question whether object representations are updated dynamically based on the surrounding scene context, for example across changes in viewpoint. Here, we tested whether human observers dynamically and automatically predict the appearance of objects based on the orientation of the background scene. In three behavioral experiments (N = 152), we temporarily occluded objects within scenes that rotated. Upon the objects' reappearance, participants had to perform a perceptual discrimination task, which did not require taking the scene rotation into account. Performance on this orthogonal task strongly depended on whether objects reappeared rotated coherently with the surrounding scene or not. This effect persisted even when a majority of trials violated this real-world contingency between scene and object, showcasing the automaticity of these scene-based predictions. These findings indicate that contextual information plays an important role in predicting object transformations in structured real-world environments.
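To make the paradigm concrete, below is a minimal sketch (not the authors' code or stimuli) of the trial structure described in the abstract: an object inside a scene is occluded while the scene rotates, then reappears either coherently rotated with the scene or not, after which participants perform an orthogonal discrimination judgment. All names, angles, and proportions are illustrative assumptions.

```python
# Hypothetical illustration of the scene-rotation/occlusion trial design.
import random
from dataclasses import dataclass

@dataclass
class Trial:
    scene_rotation_deg: float   # how far the background scene rotates during occlusion
    object_coherent: bool       # True: object reappears rotated coherently with the scene
    object_rotation_deg: float  # rotation actually applied to the reappearing object

def build_trials(n_trials: int, p_coherent: float,
                 rotation_deg: float = 30.0, seed: int = 0) -> list[Trial]:
    """Build a trial list with a given proportion of scene-coherent object reappearances."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        coherent = rng.random() < p_coherent
        # Incoherent trials: the object ignores the scene rotation (assumed manipulation).
        obj_rot = rotation_deg if coherent else 0.0
        trials.append(Trial(rotation_deg, coherent, obj_rot))
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    # e.g., a block in which the majority of trials violate the scene-object contingency
    block = build_trials(n_trials=100, p_coherent=0.25)
    print(sum(t.object_coherent for t in block), "coherent trials out of", len(block))
```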
