PhD Program in Speech, Language, Hearing Sciences, The City University of New York-Graduate School and University Center, 365 Fifth Avenue, New York, New York 10016, USA.
J Acoust Soc Am. 2011 Oct;130(4):EL226-31. doi: 10.1121/1.3630221.
Current speech perception models propose that relative perceptual difficulties with non-native segmental contrasts can be predicted from cross-language phonetic similarities. Japanese (J) listeners performed a categorical discrimination task testing nine contrasts (six adjacent height pairs, three front/back pairs) among eight American English (AE) vowels [iː, ɪ, ɛ, æː, ɑː, ʌ, ʊ, uː] in /hVbə/ disyllables. The listeners also completed a perceptual assimilation task (categorization as J vowels with category goodness ratings). Perceptual assimilation patterns, quantified as categorization overlap scores, were highly predictive of discrimination accuracy (r_s = 0.93). Results suggested that J listeners used both spectral and temporal information in discriminating the vowel contrasts.
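The abstract reports that categorization overlap scores derived from the assimilation task strongly predicted categorical-discrimination accuracy (r_s = 0.93). The Python below is a minimal illustrative sketch, not the authors' analysis code: it assumes one common reading of "categorization overlap" (the shared proportion of Japanese-category assimilation responses for the two vowels of a contrast) and relates per-contrast overlaps to discrimination accuracy with a Spearman rank correlation via scipy.

    # Illustrative sketch only -- not the authors' code. The overlap definition
    # (summed shared assimilation proportion across Japanese response categories)
    # is an assumed reading of "categorization overlap score".
    from scipy.stats import spearmanr

    def overlap_score(dist_a, dist_b):
        """Overlap between the Japanese-assimilation response distributions of
        the two AE vowels in a contrast; each dict maps a Japanese vowel
        category to its response proportion (summing to 1.0).
        Returns 0.0 (disjoint assimilation) to 1.0 (identical assimilation)."""
        categories = set(dist_a) | set(dist_b)
        return sum(min(dist_a.get(c, 0.0), dist_b.get(c, 0.0)) for c in categories)

    def overlap_vs_discrimination(overlap_scores, discrimination_accuracy):
        """Spearman rank correlation between per-contrast overlap scores and
        categorical-discrimination accuracy (one value per contrast)."""
        rho, p = spearmanr(overlap_scores, discrimination_accuracy)
        return rho, p

Whether the reported r_s = 0.93 pairs overlap with accuracy or with error rate is not specified in the abstract; the sketch simply returns the signed rank correlation between whatever two per-contrast series are supplied.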