

Perceptual Learning of Noise-Vocoded Speech Under Divided Attention.

Affiliation

Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK.

Publication

Trends Hear. 2023 Jan-Dec;27:23312165231192297. doi: 10.1177/23312165231192297.

DOI: 10.1177/23312165231192297
PMID: 37547940
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10408355/
Abstract

Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to be reliant on attention and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task where they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed this task together with a dual task aiming to recruit domain-specific (lexical or phonological), or domain-general (visual) processes. All secondary task conditions produced patterns and amount of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes and speech perceptual learning persists under divided attention.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/adbfa36665cc/10.1177_23312165231192297-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/87b7c39981c5/10.1177_23312165231192297-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/beb9e2a27088/10.1177_23312165231192297-fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/b8eadb8cff3f/10.1177_23312165231192297-fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/6e5a193428db/10.1177_23312165231192297-fig5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/e18a3bb24c4b/10.1177_23312165231192297-fig6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/777bd7228de7/10.1177_23312165231192297-fig7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/f6c192a082e6/10.1177_23312165231192297-fig8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2447/10408355/33564c3fed39/10.1177_23312165231192297-fig9.jpg

Similar Articles

1. Perceptual Learning of Noise-Vocoded Speech Under Divided Attention.
Trends Hear. 2023 Jan-Dec;27:23312165231192297. doi: 10.1177/23312165231192297.
2. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension of ).
Ear Hear. 2017 May/Jun;38(3):344-356. doi: 10.1097/AUD.0000000000000393.
3. Rapid perceptual learning of noise-vocoded speech requires attention.
J Acoust Soc Am. 2012 Mar;131(3):EL236-42. doi: 10.1121/1.3685511.
4. Transfer of auditory perceptual learning with spectrally reduced speech to speech and nonspeech tasks: implications for cochlear implants.
Ear Hear. 2009 Dec;30(6):662-74. doi: 10.1097/AUD.0b013e3181b9c92d.
5. Lexical information drives perceptual learning of distorted speech: evidence from the comprehension of noise-vocoded sentences.
J Exp Psychol Gen. 2005 May;134(2):222-41. doi: 10.1037/0096-3445.134.2.222.
6. Learning and bilingualism in challenging listening conditions: How challenging can it be?
Cognition. 2022 May;222:105018. doi: 10.1016/j.cognition.2022.105018. Epub 2022 Jan 13.
7. Relationship between perceptual learning in speech and statistical learning in younger and older adults.
Front Hum Neurosci. 2014 Sep 1;8:628. doi: 10.3389/fnhum.2014.00628. eCollection 2014.
8. Many tasks, same outcome: Role of training task on learning and maintenance of noise-vocoded speech.
J Acoust Soc Am. 2022 Aug;152(2):981. doi: 10.1121/10.0013507.
9. Sleep-Based Memory Consolidation Stabilizes Perceptual Learning of Noise-Vocoded Speech.
J Speech Lang Hear Res. 2023 Feb 13;66(2):720-734. doi: 10.1044/2022_JSLHR-22-00139. Epub 2023 Jan 20.
10. Effects of training length on adaptation to noise-vocoded speech.
J Acoust Soc Am. 2024 Mar 1;155(3):2114-2127. doi: 10.1121/10.0025273.

Cited By

1. Neural Processing of Noise-Vocoded Speech Under Divided Attention: An fMRI-Machine Learning Study.
Hum Brain Mapp. 2025 Aug 1;46(11):e70312. doi: 10.1002/hbm.70312.
2. Perceptual adaptation to dysarthric speech is modulated by concurrent phonological processing: A dual task study.
J Acoust Soc Am. 2025 Mar 1;157(3):1598-1611. doi: 10.1121/10.0035883.

References

1. The time course of adaptation to distorted speech.
J Acoust Soc Am. 2022 Apr;151(4):2636. doi: 10.1121/10.0010235.
2. Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech.
J Speech Lang Hear Res. 2021 Sep 14;64(9):3432-3445. doi: 10.1044/2021_JSLHR-21-00106. Epub 2021 Aug 31.
3. Feature-based attention enables robust, long-lasting location transfer in human perceptual learning.
Sci Rep. 2021 Jul 6;11(1):13914. doi: 10.1038/s41598-021-93016-y.
4. The Relevance of the Availability of Visual Speech Cues During Adaptation to Noise-Vocoded Speech.
J Speech Lang Hear Res. 2021 Jul 16;64(7):2513-2528. doi: 10.1044/2021_JSLHR-20-00575. Epub 2021 Jun 23.
5. Explaining the effects of distractor statistics in visual search.
J Vis. 2020 Dec 2;20(13):11. doi: 10.1167/jov.20.13.11.
6. Hearing-Impaired Listeners Show Reduced Attention to High-Frequency Information in the Presence of Low-Frequency Information.
Trends Hear. 2020 Jan-Dec;24:2331216520945516. doi: 10.1177/2331216520945516.
7. The relationship between talker acoustics, intelligibility, and effort in degraded listening conditions.
J Acoust Soc Am. 2020 May;147(5):3348. doi: 10.1121/10.0001212.
8. Cognitive mechanisms underpinning successful perception of different speech distortions.
J Acoust Soc Am. 2020 Apr;147(4):2728. doi: 10.1121/10.0001160.
9. Between-language competition as a driving force in foreign language attrition.
Cognition. 2020 May;198:104218. doi: 10.1016/j.cognition.2020.104218. Epub 2020 Mar 3.
10. Gorilla in our midst: An online behavioral experiment builder.
Behav Res Methods. 2020 Feb;52(1):388-407. doi: 10.3758/s13428-019-01237-x.