Perceptual Learning of Noise-Vocoded Speech Under Divided Attention.

Affiliations

Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK.

Publication Information

Trends Hear. 2023 Jan-Dec;27:23312165231192297. doi: 10.1177/23312165231192297.

Abstract

Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention, and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (n = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in the easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary task conditions produced patterns and amounts of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes, and speech perceptual learning persists under divided attention.
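The stimuli were noise-vocoded sentences, i.e., speech whose spectral detail is replaced by a small number of band-limited noise carriers modulated by the speech envelope in each band. The abstract does not give the vocoder parameters, so the sketch below is only a minimal illustration of the standard procedure (band-pass analysis, envelope extraction, envelope-modulated noise); the function name `noise_vocode`, the channel count, filter orders, and cutoff frequencies are illustrative assumptions, not the settings used in the study.

```python
# Minimal noise-vocoder sketch (illustrative parameters, not the study's settings).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, lo=100.0, hi=7000.0, env_cutoff=30.0):
    """Replace spectral detail with per-channel envelopes imposed on noise."""
    edges = np.geomspace(lo, hi, n_channels + 1)       # log-spaced channel edges (Hz)
    carrier = np.random.default_rng(0).standard_normal(len(speech))  # broadband noise
    sos_env = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(speech), dtype=float)
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        sos_band = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos_band, speech)                 # analysis band of the speech
        env = sosfiltfilt(sos_env, np.abs(hilbert(band)))    # smoothed amplitude envelope
        env = np.clip(env, 0.0, None)
        out += env * sosfiltfilt(sos_band, carrier)          # noise limited to the same band
    # Match the overall RMS of the input so the vocoded signal has a similar level.
    out *= np.sqrt(np.mean(speech**2) / (np.mean(out**2) + 1e-12))
    return out

if __name__ == "__main__":
    # Stand-in signal (an amplitude-modulated tone) in place of a recorded sentence.
    fs = 16000
    t = np.arange(fs) / fs
    speech = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
    vocoded = noise_vocode(speech, fs)
```

With few channels, such vocoding preserves temporal envelope cues but removes fine spectral structure, which is why recognition starts low and improves with exposure, as measured in the speech task described above.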


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/adbfa36665cc/10.1177_23312165231192297-fig1.jpg
