Training enhances the ability of listeners to exploit visual information for auditory scene analysis.

Author Information

Atilgan Huriye, Bizley Jennifer K

Affiliations

The Ear Institute, University College London, UK.

Publication Information

Cognition. 2021 Mar;208:104529. doi: 10.1016/j.cognition.2020.104529. Epub 2020 Dec 26.

Abstract

The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously we observed that temporally correlated changes in the size of a visual stimulus and the intensity of an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that were trained to discriminate modulation rate differences between temporally coherent audiovisual streams, although they did show an improvement in their overall performance. A control group did not change their performance between pretest and post-test and did not change how they exploited visual information. These results provide insights into how crossmodal experience may optimize multisensory integration.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/df9e/7868888/f926c37252ae/gr1.jpg
