Burgess R, Culpin I, Costantini I, Bould H, Nabney I, Pearson R M
The Digital Health Engineering Group, Merchant Venturers Building, University of Bristol, Bristol, United Kingdom.
The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom.
Front Psychol. 2023 Jul 31;14:1223806. doi: 10.3389/fpsyg.2023.1223806. eCollection 2023.
This work explores the use of an automated facial coding software, FaceReader, as an alternative and/or complementary method to manual coding.
We used videos of parents (fathers, n = 36; mothers, n = 29) taken from the Avon Longitudinal Study of Parents and Children. The videos, obtained during real-life parent-infant interactions in the home, were coded both manually (using an existing coding scheme) and by FaceReader. We established a correspondence between the manual and automated coding categories (namely Positive, Neutral, Negative, and Surprise) before contingency tables were employed to examine the software's detection rate and quantify the agreement between manual and automated coding. By employing binary logistic regression, we examined the predictive potential of FaceReader outputs in determining manually classified facial expressions. An interaction term was used to investigate the impact of gender on our models, seeking to estimate its influence on predictive accuracy.
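The analysis structure described above (a contingency table comparing manual and automated labels, then a binary logistic regression with a gender interaction term) can be sketched as follows. This is a minimal illustration on synthetic data; the column names, intensity scores, and effect sizes are assumptions for demonstration, not the study's actual variables or results.

```python
# Illustrative sketch of the abstract's analysis pipeline on synthetic data.
# All variable names (fr_positive, gender, manual_positive) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Synthetic frame-level data: a FaceReader-style "positive" intensity score
# and the parent's gender.
df = pd.DataFrame({
    "fr_positive": rng.uniform(0, 1, n),
    "gender": rng.choice(["father", "mother"], n),
})
# Simulate a manual binary label for a Positive expression.
logit = -1.5 + 3.0 * df["fr_positive"] + 0.5 * (df["gender"] == "mother")
df["manual_positive"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Contingency table: agreement between manual and (thresholded) automated coding.
table = pd.crosstab(df["manual_positive"], df["fr_positive"] > 0.5)

# Binary logistic regression with a gender interaction term, mirroring the
# model structure in the abstract (automated output predicting manual label).
model = smf.logit("manual_positive ~ fr_positive * gender", data=df).fit(disp=0)
print(table)
print(model.params)
```

The `fr_positive * gender` formula term expands to the main effects plus their interaction, so the fitted coefficients include a gender-specific slope, which is how one would estimate whether the software's output predicts manual labels differently for fathers and mothers.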
We found that the automated facial detection rate was low (25.2% for fathers, 24.6% for mothers) compared to manual coding, and discuss some potential explanations for this (e.g., poor lighting and facial occlusion). Our logistic regression analyses found that Surprise and Positive expressions had strong predictive capabilities, whilst Negative expressions performed poorly. Mothers' faces were more important for predicting Positive and Neutral expressions, whilst fathers' faces were more important in predicting Negative and Surprise expressions.
We discuss the implications of our findings in the context of future automated facial coding studies, and we emphasise the need to consider gender-specific influences in automated facial coding research.