Corina D P, Bellugi U, Reilly J
Department of Psychology, University of Washington, Seattle 98195, USA.
Lang Speech. 1999 Apr-Sep;42 ( Pt 2-3):307-31. doi: 10.1177/00238309990420020801.
For deaf users of American Sign Language (ASL), facial behaviors function in two distinct ways: to convey affect (as with spoken languages) and to mark certain specific grammatical structures (e.g., relative clauses), thus subserving distinctly linguistic functions in ways that are unique to signed languages. The existence of two functionally different classes of facial behaviors raises questions concerning neural control of language and nonlanguage functions. Examining patterns of neural mediation for differential functions of facial expressions, linguistic versus affective, provides a unique perspective on the determinants of hemispheric specialization. This paper presents two studies which explore facial expression production in deaf signers. An experimental paradigm uses chimeric stimuli of ASL linguistic and affective facial expressions (photographs of right vs. left composites of posed expressions) to explore patterns of productive asymmetries in brain-intact signers. A second study examines facial expression production in left- and right-hemisphere lesioned deaf signers, specifying unique patterns of spared and impaired functions. Both studies show striking differences between affective and linguistic facial expressions. The data indicate that for deaf signing individuals, affective expressions appear to be primarily mediated by the right hemisphere. In contrast, these studies provide evidence that linguistic facial expressions involve left-hemisphere mediation. This represents an important finding, since one and the same muscular system is involved in two functionally distinct types of facial expressions. For hearing persons, the right hemisphere may be predominant in affective facial expression, but for deaf signers, hemispheric specialization for facial signals is influenced by the purposes those signals serve. Taken together, the data provide important new insights into the determinants of the specialization of the cerebral hemispheres in humans.