
Do audio-visual motion cues promote segregation of auditory streams?

Affiliations

Pavlov Institute of Physiology, Russian Academy of Sciences, St. Petersburg, Russia.

Research Centre for Natural Sciences, Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, Budapest, Hungary; Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, Budapest, Hungary.

Publication Information

Front Neurosci. 2014 Apr 7;8:64. doi: 10.3389/fnins.2014.00064. eCollection 2014.

Abstract

An audio-visual experiment using moving sound sources was designed to investigate whether the analysis of auditory scenes is modulated by synchronous presentation of visual information. Listeners were presented with an alternating sequence of two pure tones delivered by two separate sound sources. In different conditions, the two sound sources were either stationary or moving on random trajectories around the listener. Both the sounds and the movement trajectories were derived from recordings in which two humans were moving with loudspeakers attached to their heads. Visualized movement trajectories modeled by a computer animation were presented together with the sounds. In the main experiment, behavioral reports on sound organization were collected from young healthy volunteers. The proportion and stability of the different sound organizations were compared between the conditions in which the visualized trajectories matched the movement of the sound sources and when the two were independent of each other. The results corroborate earlier findings that separation of sound sources in space promotes segregation. However, no additional effect of auditory movement per se on the perceptual organization of sounds was obtained. Surprisingly, the presentation of movement-congruent visual cues did not strengthen the effects of spatial separation on segregating auditory streams. Our findings are consistent with the view that bistability in the auditory modality can occur independently from other modalities.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6b74/3985028/22b1db6d8148/fnins-08-00064-g0001.jpg
