Cross-modal speech perception in adults and infants using nonspeech auditory stimuli.

Author information

Kuhl P K, Williams K A, Meltzoff A N

Affiliation

Department of Speech and Hearing Sciences, University of Washington, Seattle 98195.

Publication information

J Exp Psychol Hum Percept Perform. 1991 Aug;17(3):829-40. doi: 10.1037//0096-1523.17.3.829.

Abstract

Adults and infants were tested for the capacity to detect correspondences between nonspeech sounds and real vowels. The /i/ and /a/ vowels were presented in 3 different ways: auditory speech, silent visual faces articulating the vowels, or mentally imagined vowels. The nonspeech sounds were either pure tones or 3-tone complexes that isolated a single feature of the vowel without allowing the vowel to be identified. Adults perceived an orderly relation between the nonspeech sounds and vowels. They matched high-pitched nonspeech sounds to /i/ vowels and low-pitched nonspeech sounds to /a/ vowels. In contrast, infants could not match nonspeech sounds to the visually presented vowels. Infants' detection of correspondence between auditory and visual speech appears to require the whole speech signal; with development, an isolated feature of the vowel is sufficient for detection of the cross-modal correspondence.
