Gandour Jack, Wong Donald, Lowe Mark, Dzemidzic Mario, Satthamnuwong Nakarin, Tong Yunxia, Li Xiaojian
Department of Audiology and Speech Sciences, Purdue University, West Lafayette, IN 47907-1353, USA.
J Cogn Neurosci. 2002 Oct 1;14(7):1076-87. doi: 10.1162/089892902320474526.
It remains a matter of controversy precisely what kind of neural mechanisms underlie functional asymmetries in speech processing. Whereas some studies support speech-specific circuits, others suggest that lateralization is dictated by relative computational demands of complex auditory signals in the spectral or time domains. To examine how the brain processes linguistically relevant spectral and temporal information, a functional magnetic resonance imaging study was conducted using Thai speech, in which spectral processing associated with lexical tones and temporal processing associated with vowel length can be differentiated. Ten Thai and 10 Chinese subjects were asked to perform discrimination judgments of pitch and timing patterns presented in the same auditory stimuli under two different conditions: speech (Thai) and nonspeech (hums). In the speech condition, tasks required judging Thai tones (T) and vowel length (VL); in the nonspeech condition, homologous pitch contours (P) and duration patterns (D). A remaining task required listening passively to nonspeech hums (L). Only the Thai group showed activation in the left inferior prefrontal cortex in speech minus nonspeech contrasts for spectral (T vs. P) and temporal (VL vs. D) cues. Thai and Chinese groups, however, exhibited similar fronto-parietal activation patterns in nonspeech hums minus passive listening contrasts for spectral (P vs. L) and temporal (D vs. L) cues. It appears that lower level specialization for acoustic cues in the spectral and temporal domains cannot be generalized to abstract higher order levels of phonological processing. Regardless of the neural mechanisms underlying low-level auditory processing, our findings clearly indicate that hemispheric specialization is sensitive to language-specific factors.