

During Lipreading Training With Sentence Stimuli, Feedback Controls Learning and Generalization to Audiovisual Speech in Noise.

Affiliations

Department of Speech, Language, and Hearing Sciences, George Washington University, DC.

Publication Information

Am J Audiol. 2022 Mar 3;31(1):57-77. doi: 10.1044/2021_AJA-21-00034. Epub 2021 Dec 29.

Abstract

PURPOSE

This study investigated the effects of external feedback on perceptual learning of visual speech during lipreading training with sentence stimuli. The goal was to improve visual-only (VO) speech recognition and increase accuracy of audiovisual (AV) speech recognition in noise. The rationale was that spoken word recognition depends on the accuracy of sublexical (phonemic/phonetic) speech perception; effective feedback during training must support sublexical perceptual learning.

METHOD

Normal-hearing (NH) adults were assigned to one of three types of feedback: Sentence feedback was the entire sentence printed after responding to the stimulus. Word feedback was the correct response words and perceptually near but incorrect response words. Consonant feedback was correct response words and consonants in incorrect but perceptually near response words. Six training sessions were given. Pre- and posttraining testing included an untrained control group. Test stimuli were disyllable nonsense words for forced-choice consonant identification, and isolated words and sentences for open-set identification. Words and sentences were VO, AV, and audio-only (AO) with the audio in speech-shaped noise.
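
For readers who prefer the design laid out as structured data, below is a minimal Python sketch of the groups, training schedule, and test battery described in this abstract. All identifiers (FEEDBACK_GROUPS, TEST_BATTERY, and so on) are illustrative labels for this summary, not the study's actual materials or code.

```python
# Illustrative reconstruction of the between-group design described in METHOD.
# Names and structure are this summary's own, not the authors' materials.

FEEDBACK_GROUPS = {
    "sentence":  "entire sentence printed after the response",
    "word":      "correct response words plus perceptually near but incorrect response words",
    "consonant": "correct response words plus consonants in perceptually near incorrect words",
    "control":   "no training (pre- and posttest only)",
}

TRAINING_SESSIONS = 6  # per trained group

# Pre- and posttraining test battery.
TEST_BATTERY = {
    "consonant_identification": {"stimuli": "disyllable nonsense words",
                                 "task": "forced choice"},
    "isolated_words":           {"stimuli": "words", "task": "open set",
                                 "modalities": ["VO", "AV", "AO"]},
    "sentences":                {"stimuli": "untrained sentences", "task": "open set",
                                 "modalities": ["VO", "AV", "AO"]},
}

# The audio of AO stimuli and of the AV audio channel was in speech-shaped noise.
NOISE_BY_MODALITY = {"VO": None, "AV": "speech-shaped noise", "AO": "speech-shaped noise"}

if __name__ == "__main__":
    for group, feedback in FEEDBACK_GROUPS.items():
        print(f"{group:>9}: {feedback}")
```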

RESULTS

Lipreading accuracy increased during training. Pre- and posttraining tests of consonant identification showed no improvement beyond test-retest increases obtained by untrained controls. Isolated word recognition with a talker not seen during training showed that the control group improved more than the sentence group. Tests of untrained sentences showed that the consonant group significantly improved in all of the stimulus conditions (VO, AO, and AV). Its mean words correct scores increased by 9.2 percentage points for VO, 3.4 percentage points for AO, and 9.8 percentage points for AV stimuli.
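
As a note on the arithmetic, the gains above are percentage-point differences (posttraining minus pretraining percent-words-correct), not relative percent changes. The sketch below makes that distinction explicit; the pre/post values in the demo are hypothetical, since the abstract reports only the gains.

```python
# Reported mean words-correct gains for the consonant-feedback group on
# untrained sentences (posttraining minus pretraining, in percentage points).
REPORTED_GAINS_PP = {"VO": 9.2, "AO": 3.4, "AV": 9.8}

def percentage_point_gain(pre_pct: float, post_pct: float) -> float:
    """Percentage-point gain: a simple difference of two percent-correct
    scores, not a relative (percent) change."""
    return post_pct - pre_pct

if __name__ == "__main__":
    # Hypothetical pre/post scores, chosen only to illustrate the arithmetic;
    # the abstract does not report the underlying pre/post means.
    print(round(percentage_point_gain(pre_pct=30.0, post_pct=39.2), 1))  # 9.2
    for modality, gain in REPORTED_GAINS_PP.items():
        print(f"{modality}: +{gain} percentage points")
```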

CONCLUSIONS

Consonant feedback during training with sentence stimuli significantly increased perceptual learning. The training generalized to untrained VO, AO, and AV sentence stimuli. Lipreading training has the potential to significantly improve adults' face-to-face communication in noisy settings in which the talker can be seen.



