Suppr 超能文献


Modality-Specific Perceptual Learning of Vocoded Auditory versus Lipread Speech: Different Effects of Prior Information.

Author Information

Bernstein Lynne E, Auer Edward T, Eberhardt Silvio P

Affiliation

Speech, Language, and Hearing Sciences Department, George Washington University, Washington, DC 20052, USA.

Publication Information

Brain Sci. 2023 Jun 29;13(7):1008. doi: 10.3390/brainsci13071008.

DOI: 10.3390/brainsci13071008
PMID: 37508940
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10377548/
Abstract

Traditionally, speech perception training paradigms have not adequately taken into account the possibility that there may be modality-specific requirements for perceptual learning with auditory-only (AO) versus visual-only (VO) speech stimuli. The study reported here investigated the hypothesis that there are modality-specific differences in how prior information is used by normal-hearing participants during vocoded versus VO speech training. Two different experiments, one with vocoded AO speech (Experiment 1) and one with VO, lipread, speech (Experiment 2), investigated the effects of giving different types of information to trainees on each trial during training. The training was for four ~20 min sessions, during which participants learned to label novel visual images using novel spoken words. Participants were assigned to different types of prior information during training: Word Group trainees saw a printed version of each training word (e.g., "tethon"), and Consonant Group trainees saw only its consonants (e.g., "t_th_n"). Additional groups received no prior information (i.e., Experiment 1, AO Group; Experiment 2, VO Group) or a spoken version of the stimulus in a different modality from the training stimuli (Experiment 1, Lipread Group; Experiment 2, Vocoder Group). That is, in each experiment, there was a group that received prior information in the modality of the training stimuli from the other experiment. In both experiments, the Word Groups had difficulty retaining the novel words they attempted to learn during training. However, when the training stimuli were vocoded, the Word Group improved their phoneme identification. When the training stimuli were visual speech, the Consonant Group improved their phoneme identification and their open-set sentence lipreading. The results are considered in light of theoretical accounts of perceptual learning in relationship to perceptual modality.


Similar Articles

1. Modality-Specific Perceptual Learning of Vocoded Auditory versus Lipread Speech: Different Effects of Prior Information.
Brain Sci. 2023 Jun 29;13(7):1008. doi: 10.3390/brainsci13071008.
2. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.
Front Hum Neurosci. 2014 Oct 31;8:829. doi: 10.3389/fnhum.2014.00829. eCollection 2014.
3. During Lipreading Training With Sentence Stimuli, Feedback Controls Learning and Generalization to Audiovisual Speech in Noise.
Am J Audiol. 2022 Mar 3;31(1):57-77. doi: 10.1044/2021_AJA-21-00034. Epub 2021 Dec 29.
4. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.
Front Psychol. 2014 Aug 26;5:934. doi: 10.3389/fpsyg.2014.00934. eCollection 2014.
5. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.
Front Neurosci. 2013 Mar 18;7:34. doi: 10.3389/fnins.2013.00034. eCollection 2013.
6. Perceptual Doping: An Audiovisual Facilitation Effect on Auditory Speech Processing, From Phonetic Feature Extraction to Sentence Identification in Noise.
Ear Hear. 2019 Mar/Apr;40(2):312-327. doi: 10.1097/AUD.0000000000000616.
7. Modality Effects on Lexical Encoding and Memory Representations of Spoken Words.
Ear Hear. 2020 Jul/Aug;41(4):825-837. doi: 10.1097/AUD.0000000000000801.
8. Transfer of auditory perceptual learning with spectrally reduced speech to speech and nonspeech tasks: implications for cochlear implants.
Ear Hear. 2009 Dec;30(6):662-74. doi: 10.1097/AUD.0b013e3181b9c92d.
9. Lipreading: A Review of Its Continuing Importance for Speech Recognition With an Acquired Hearing Loss and Possibilities for Effective Training.
Am J Audiol. 2022 Jun 2;31(2):453-469. doi: 10.1044/2021_AJA-21-00112. Epub 2022 Mar 22.
10. Comparison of word-, sentence-, and phoneme-based training strategies in improving the perception of spectrally distorted speech.
J Speech Lang Hear Res. 2008 Apr;51(2):526-38. doi: 10.1044/1092-4388(2008/038).

Cited By

1. Advances in Understanding the Phenomena and Processing in Audiovisual Speech Perception.
Brain Sci. 2023 Sep 20;13(9):1345. doi: 10.3390/brainsci13091345.

References

1. A representation of abstract linguistic categories in the visual system underlies successful lipreading.
Neuroimage. 2023 Nov 15;282:120391. doi: 10.1016/j.neuroimage.2023.120391. Epub 2023 Sep 25.
2. Lipreading: A Review of Its Continuing Importance for Speech Recognition With an Acquired Hearing Loss and Possibilities for Effective Training.
Am J Audiol. 2022 Jun 2;31(2):453-469. doi: 10.1044/2021_AJA-21-00112. Epub 2022 Mar 22.
3. During Lipreading Training With Sentence Stimuli, Feedback Controls Learning and Generalization to Audiovisual Speech in Noise.
Am J Audiol. 2022 Mar 3;31(1):57-77. doi: 10.1044/2021_AJA-21-00034. Epub 2021 Dec 29.
4. Auditory and auditory-visual frequency-band importance functions for consonant recognition.
J Acoust Soc Am. 2020 May;147(5):3712. doi: 10.1121/10.0001301.
5. Face viewing behavior predicts multisensory gain during speech perception.
Psychon Bull Rev. 2020 Feb;27(1):70-77. doi: 10.3758/s13423-019-01665-y.
6. Age, Hearing, and the Perceptual Learning of Rapid Speech.
Trends Hear. 2018 Jan-Dec;22:2331216518778651. doi: 10.1177/2331216518778651.
7. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.
Front Hum Neurosci. 2017 Apr 7;11:174. doi: 10.3389/fnhum.2017.00174. eCollection 2017.
8. Decoding the Cortical Dynamics of Sound-Meaning Mapping.
J Neurosci. 2017 Feb 1;37(5):1312-1319. doi: 10.1523/JNEUROSCI.2858-16.2016. Epub 2016 Dec 27.
9. The role of feedback contingency in perceptual category learning.
J Exp Psychol Learn Mem Cogn. 2016 Nov;42(11):1731-1746. doi: 10.1037/xlm0000277. Epub 2016 May 5.
10. Perceptual learning of degraded speech by minimizing prediction error.
Proc Natl Acad Sci U S A. 2016 Mar 22;113(12):E1747-56. doi: 10.1073/pnas.1523266113. Epub 2016 Mar 8.