Bimodal moment-by-moment coupling in perceptual multistability.

Affiliations

Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany.

Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany.

Publication information

J Vis. 2024 May 1;24(5):16. doi: 10.1167/jov.24.5.16.

Abstract

Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether there are overarching mechanisms driving multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously for both modalities without provoking a dual-task situation, we query auditory perception by direct report, while measuring visual perception indirectly via eye movements. A support-vector-machine (SVM)-based classifier allows us to decode visual perception from the eye-tracking data on a moment-by-moment basis. For each timepoint, we compare visual percept (SVM output) and auditory percept (report) and quantify the co-occurrence of integrated (one-object) or segregated (two-object) interpretations in the two modalities. Our results show an above-chance coupling of auditory and visual perceptual interpretations. By titrating stimulus parameters toward an approximately symmetric distribution of integrated and segregated percepts for each modality and individual, we minimize the amount of coupling expected by chance. Because of the nature of our task, we can rule out that the coupling stems from postperceptual levels (i.e., decision or response interference). Our results thus indicate moment-by-moment perceptual coupling in the resolution of visual and auditory multistability, lending support to theories that postulate joint mechanisms for multistable perception across the senses.
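The analysis described above — decoding a binary visual percept from eye-movement features with an SVM, then comparing it per timepoint against the reported auditory percept and testing co-occurrence against chance — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline; the feature names, the coupling strength, and the shuffle-based chance estimate are assumptions for demonstration only.

```python
# Sketch of the per-timepoint decode-and-compare analysis (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 400  # number of timepoints

# Synthetic eye-movement features per timepoint (placeholders for
# whatever oculomotor signatures distinguish the two interpretations).
X = rng.normal(size=(n, 2))
# Ground-truth visual percept: 0 = integrated (one object),
# 1 = segregated (two objects); driven by the first feature plus noise.
y_visual = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Cross-validated SVM decoding of the visual percept from eye features.
visual_hat = cross_val_predict(SVC(kernel="rbf"), X, y_visual, cv=5)

# Auditory report, coupled to the visual percept on 70% of timepoints
# (an assumed coupling strength, for illustration).
y_auditory = np.where(rng.random(n) < 0.7, y_visual, 1 - y_visual)

# Observed co-occurrence: same interpretation in both modalities.
observed = np.mean(visual_hat == y_auditory)

# Chance level: shuffle the auditory time series, which preserves each
# modality's marginal distribution but destroys moment-by-moment coupling.
chance = np.mean([np.mean(visual_hat == rng.permutation(y_auditory))
                  for _ in range(1000)])
print(f"observed coupling {observed:.2f} vs. chance {chance:.2f}")
```

Shuffling (rather than assuming 50% chance) matters because an asymmetric split of integrated vs. segregated percepts in either modality inflates co-occurrence by itself — which is why the study titrates stimulus parameters toward a symmetric percept distribution per observer.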

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/13e8/11146044/663e4121210e/jovi-24-5-16-f001.jpg
