Soken NH, Pick AD.
Institute of Child Development, University of Minnesota, Minneapolis 55455.
Child Dev. 1992 Aug;63(4):787-95.
2 studies were conducted to examine the roles of facial motion and temporal correspondences in the intermodal perception of happy and angry expressive events. 7-month-old infants saw 2 video facial expressions and heard a single vocal expression characteristic of one of the facial expressions. Infants saw either a normally lighted face (fully illuminated condition) or a moving dot display of a face (point light condition). In Study 1, one woman expressed the affects vocally, another woman expressed the affects facially, and what they said also differed. Infants in the point light condition showed a reliable preference for the affectively concordant displays, while infants in the fully illuminated condition showed no preference for the affectively concordant display. In a second study, the visual and vocal displays were produced by a single individual on one occasion and were presented to infants 5 sec out of synchrony. Infants in both conditions looked longer at the affectively concordant displays. The results of the 2 studies indicate that infants can discriminate happy and angry affective expressions on the basis of motion information, and that the temporal correspondences unifying these affective events may be affect-specific rhythms.