Graduate School of Arts and Sciences, the University of Tokyo, Tokyo, Japan.
PLoS One. 2013;8(2):e57325. doi: 10.1371/journal.pone.0057325. Epub 2013 Feb 22.
Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables, assessed hit rates and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction times. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components of visual processing.