Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands.
Sci Rep. 2021 Apr 15;11(1):8287. doi: 10.1038/s41598-021-87881-w.
Emotional facial expressions are important visual communication signals that indicate a sender's intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when different face stimuli are used. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their low-level image features rather than in terms of the emotional content (e.g., angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classifying the initial eye movement towards one of two simultaneously presented faces. Interestingly, the identified features serve as better predictors of this initial eye movement than the emotional content of the expressions. We therefore propose that our modelling approach can further specify which visual features drive these and other behavioural effects related to emotional expressions, which may help resolve the inconsistencies found in this line of research.
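To make the abstract's pipeline concrete, the following is a minimal sketch, not the authors' exact method: it extracts an overall SF profile (radially binned Fourier amplitude) and localized HOG descriptors from two grayscale face images, and trains a classifier to predict which face receives the initial saccade. All function names, parameters, and the difference-based trial encoding are illustrative assumptions.

```python
# Hypothetical sketch of SF + HOG features predicting the first saccade.
# Assumes grayscale face image arrays and a binary "first saccade left" label.
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def sf_energy(img, n_bands=8):
    """Radially binned Fourier amplitude spectrum as an overall SF profile."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)
    bins = np.linspace(0, radius.max(), n_bands + 1)
    return np.array([spectrum[(radius >= lo) & (radius < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

def face_features(img):
    """Concatenate the overall SF profile with localized HOG descriptors."""
    hog_vec = hog(img, orientations=8, pixels_per_cell=(16, 16),
                  cells_per_block=(1, 1))
    return np.concatenate([sf_energy(img), hog_vec])

def trial_features(left_img, right_img):
    """One vector per trial: left-face minus right-face feature difference."""
    return face_features(left_img) - face_features(right_img)

# left_faces, right_faces, first_saccade_left are assumed to be loaded data.
# X = np.stack([trial_features(l, r) for l, r in zip(left_faces, right_faces)])
# clf = LogisticRegression(max_iter=1000).fit(X, first_saccade_left)
```

Under this encoding, the fitted classifier weights indicate which SF bands and which HOG cells (i.e., localized orientation structure) carry the information sufficient to predict the initial eye movement, independent of emotion labels.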