
Modality-Specific Perceptual Learning of Vocoded Auditory versus Lipread Speech: Different Effects of Prior Information.

Author Information

Bernstein Lynne E, Auer Edward T, Eberhardt Silvio P

Affiliation

Speech, Language, and Hearing Sciences Department, George Washington University, Washington, DC 20052, USA.

Publication Information

Brain Sci. 2023 Jun 29;13(7):1008. doi: 10.3390/brainsci13071008.

Abstract

Traditionally, speech perception training paradigms have not adequately taken into account the possibility that there may be modality-specific requirements for perceptual learning with auditory-only (AO) versus visual-only (VO) speech stimuli. The study reported here investigated the hypothesis that there are modality-specific differences in how normal-hearing participants use prior information during vocoded versus VO speech training. Two experiments, one with vocoded AO speech (Experiment 1) and one with VO (lipread) speech (Experiment 2), investigated the effects of giving trainees different types of information on each trial during training. Training comprised four ~20 min sessions, during which participants learned to label novel visual images using novel spoken words. Participants were assigned to different types of prior information during training: Word Group trainees saw a printed version of each training word (e.g., "tethon"), and Consonant Group trainees saw only its consonants (e.g., "t_th_n"). Additional groups received no prior information (Experiment 1, AO Group; Experiment 2, VO Group) or a spoken version of the stimulus in a different modality from the training stimuli (Experiment 1, Lipread Group; Experiment 2, Vocoder Group). That is, each experiment included a group that received prior information in the modality of the other experiment's training stimuli. In both experiments, the Word Groups had difficulty retaining the novel words they attempted to learn during training. However, when the training stimuli were vocoded, the Word Group improved their phoneme identification. When the training stimuli were visual speech, the Consonant Group improved both their phoneme identification and their open-set sentence lipreading. The results are considered in light of theoretical accounts of perceptual learning in relation to perceptual modality.
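The abstract does not specify how the AO stimuli were vocoded, but noise-excited channel vocoders are the standard way to create such stimuli: the speech signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise, removing spectral fine structure while preserving temporal cues. Below is a minimal sketch of that generic technique; the channel count, band edges, and filter order are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=6, f_lo=100.0, f_hi=8000.0):
    """Generic noise-excited channel vocoder (illustrative, not the
    authors' implementation): split speech into log-spaced bands,
    extract each band's amplitude envelope, and use the envelope to
    modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))                   # amplitude envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                          # envelope-modulated noise band
    return out / np.max(np.abs(out))                  # normalize to avoid clipping
```

With few channels the output keeps the rhythm and envelope of the original speech but little spectral detail, which is why vocoded speech is initially hard to understand and why listeners benefit from perceptual training of the kind studied here.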

