Research Center for Mind, Brain and Learning, National Chengchi University, Taipei, Taiwan (Republic of China).
Emotion. 2013 Jun;13(3):573-86. doi: 10.1037/a0027285. Epub 2012 Apr 16.
Facial expressions are highly dynamic signals that are rarely encountered as static, isolated displays. However, the role of sequential context in facial expression categorization is poorly understood. This study examined the fine temporal structure of expression-based categorization on a trial-to-trial basis as participants categorized a sequence of facial expressions. The results showed that the local sequential context provided by preceding facial expressions could bias categorical judgments of current facial expressions. Two types of categorization bias were found: (a) assimilation effects, in which current expressions were categorized toward the category of the preceding expressions, and (b) contrast effects, in which current expressions were categorized away from the category of the preceding expressions. These categorization biases were modulated by the relative distance between the preceding and current expressions, as well as by the experimental context, possibly including the factors of face identity and the range effect. Thus, the present study suggests that facial expression categorization is not a static process. Rather, the temporal relation between preceding and current expressions can inform categorization, revealing a more dynamic and adaptive aspect of facial expression processing.