
Perceptual learning of multiple talkers requires additional exposure.

Affiliations

Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA.

The Connecticut Institute for the Brain and Cognitive Sciences, Storrs, CT, USA.

Publication Information

Atten Percept Psychophys. 2021 Jul;83(5):2217-2228. doi: 10.3758/s13414-021-02261-w. Epub 2021 Mar 22.

Abstract

Because different talkers produce their speech sounds differently, listeners benefit from maintaining distinct generative models (sets of beliefs) about the correspondence between acoustic information and phonetic categories for different talkers. A robust literature on phonetic recalibration indicates that when listeners encounter a talker who produces their speech sounds idiosyncratically (e.g., a talker who produces their /s/ sound atypically), they can update their generative model for that talker. Such recalibration has been shown to occur in a relatively talker-specific way. Because listeners in ecological situations often meet several new talkers at once, the present study considered how the process of simultaneously updating two distinct generative models compares to updating one model at a time. Listeners were exposed to two talkers, one who produced /s/ atypically and one who produced /ʃ/ atypically. Critically, these talkers only produced these sounds in contexts where lexical information disambiguated the phoneme's identity (e.g., epi_ode, flouri_ing). When initial exposure to the two talkers was blocked by voice (Experiment 1), listeners recalibrated to these talkers after relatively little exposure to each talker (32 instances per talker, of which 16 contained ambiguous fricatives). However, when the talkers were intermixed during learning (Experiment 2), listeners required more exposure trials before they were able to adapt to the idiosyncratic productions of these talkers (64 instances per talker, of which 32 contained ambiguous fricatives). Results suggest that there is a perceptual cost to simultaneously updating multiple distinct generative models, potentially because listeners must first select which generative model to update.

Similar Articles

1
Perceptual learning of multiple talkers requires additional exposure.
Atten Percept Psychophys. 2021 Jul;83(5):2217-2228. doi: 10.3758/s13414-021-02261-w. Epub 2021 Mar 22.
2
Lexical Information Guides Retuning of Neural Patterns in Perceptual Learning for Speech.
J Cogn Neurosci. 2020 Oct;32(10):2001-2012. doi: 10.1162/jocn_a_01612. Epub 2020 Jul 14.
3
The Black Book of Psychotropic Dosing and Monitoring.
Psychopharmacol Bull. 2024 Jul 8;54(3):8-59.
4
Autistic Students' Experiences of Employment and Employability Support while Studying at a UK University.
Autism Adulthood. 2025 Apr 3;7(2):212-222. doi: 10.1089/aut.2024.0112. eCollection 2025 Apr.
5
Perceptual learning of multiple talkers: Determinants, characteristics, and limitations.
Atten Percept Psychophys. 2022 Oct;84(7):2335-2359. doi: 10.3758/s13414-022-02556-6. Epub 2022 Sep 8.
6
Effectiveness of voice rehabilitation on vocalisation in postlaryngectomy patients: a systematic review.
Int J Evid Based Healthc. 2010 Dec;8(4):256-8. doi: 10.1111/j.1744-1609.2010.00177.x.

Cited By

1
Limited learning and adaptation in disfluency processing among older adults.
Psychol Aging. 2025 Jun;40(4):439-447. doi: 10.1037/pag0000887. Epub 2025 Mar 20.
2
When Jack isn't Jacques: Simultaneous opposite language-specific speech perceptual learning in French-English bilinguals.
PNAS Nexus. 2024 Aug 23;3(9):pgae354. doi: 10.1093/pnasnexus/pgae354. eCollection 2024 Sep.
3
The Cerebellum Is Sensitive to the Lexical Properties of Words During Spoken Language Comprehension.
Neurobiol Lang (Camb). 2024 Aug 15;5(3):757-773. doi: 10.1162/nol_a_00126. eCollection 2024.
4
Right Posterior Temporal Cortex Supports Integration of Phonetic and Talker Information.
Neurobiol Lang (Camb). 2023 Mar 8;4(1):145-177. doi: 10.1162/nol_a_00091. eCollection 2023.
5
Reliability and validity for perceptual flexibility in speech.
Brain Lang. 2022 Mar;226:105070. doi: 10.1016/j.bandl.2021.105070. Epub 2022 Jan 10.
6
Listener expectations and the perceptual accommodation of talker variability: A pre-registered replication.
Atten Percept Psychophys. 2021 Aug;83(6):2367-2376. doi: 10.3758/s13414-021-02317-x. Epub 2021 May 4.

References

1
Boosting lexical support does not enhance lexically guided perceptual learning.
J Exp Psychol Learn Mem Cogn. 2021 Apr;47(4):685-704. doi: 10.1037/xlm0000945. Epub 2020 Oct 15.
2
Listeners are initially flexible in updating phonetic beliefs over time.
Psychon Bull Rev. 2021 Aug;28(4):1354-1364. doi: 10.3758/s13423-021-01885-1. Epub 2021 Mar 19.
3
A second chance for a first impression: Sensitivity to cumulative input statistics for lexically guided perceptual learning.
Psychon Bull Rev. 2021 Jun;28(3):1003-1014. doi: 10.3758/s13423-020-01840-6. Epub 2021 Jan 14.
4
Structure in talker variability: How much is there and how much can it help?
Lang Cogn Neurosci. 2018;34(1):43-68. doi: 10.1080/23273798.2018.1500698. Epub 2018 Jul 30.
5
Lexically guided perceptual learning is robust to task-based changes in listening strategy.
J Acoust Soc Am. 2018 Aug;144(2):1089. doi: 10.1121/1.5047672.
6
Speaker information affects false recognition of unstudied lexical-semantic associates.
Atten Percept Psychophys. 2018 May;80(4):894-912. doi: 10.3758/s13414-018-1485-z.
7
Headphone screening to facilitate web-based auditory experiments.
Atten Percept Psychophys. 2017 Oct;79(7):2064-2072. doi: 10.3758/s13414-017-1361-2.
