

Using automated computer vision and machine learning to code facial expressions of affect and arousal: Implications for emotion dysregulation research.

Affiliations

Department of Psychology, Ohio State University, Columbus, OH, USA.

Department of Psychology, University of Utah, Salt Lake City, UT, USA.

Publication Information

Dev Psychopathol. 2019 Aug;31(3):871-886. doi: 10.1017/S0954579419000312. Epub 2019 Mar 28.

Abstract

As early as infancy, caregivers' facial expressions shape children's behaviors, help them regulate their emotions, and encourage or dissuade their interpersonal agency. In childhood and adolescence, proficiencies in producing and decoding facial expressions promote social competence, whereas deficiencies characterize several forms of psychopathology. To date, however, studying facial expressions has been hampered by the labor-intensive, time-consuming nature of human coding. We describe a partial solution: automated facial expression coding (AFEC), which combines computer vision and machine learning to code facial expressions in real time. Although AFEC cannot capture the full complexity of human emotion, it codes positive affect, negative affect, and arousal (core Research Domain Criteria constructs) as accurately as humans, and it characterizes emotion dysregulation with greater specificity than other objective measures such as autonomic responding. We provide an example in which we use AFEC to evaluate emotion dynamics in mother-daughter dyads engaged in conflict. Among other findings, AFEC (a) shows convergent validity with a validated human coding scheme, (b) distinguishes among risk groups, and (c) detects developmental increases in positive dyadic affect correspondence as teen daughters age. Although more research is needed to realize the full potential of AFEC, findings demonstrate its current utility in research on emotion dysregulation.
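To make the dyad-level quantities in the abstract concrete, the sketch below shows how per-frame positive-affect scores from an AFEC tool might be (a) checked for convergent validity against human codes and (b) summarized as dyadic affect correspondence via lagged correlation. This is an illustrative sketch, not the authors' pipeline; the synthetic data, 30 fps sampling rate, and all function and variable names are assumptions.

```python
"""
Illustrative sketch only (not the paper's actual pipeline): given per-frame
positive-affect scores for a mother and her daughter, quantify
(a) convergent validity against human codes and (b) dyadic affect
correspondence via lagged correlation. Synthetic data, the 30 fps rate,
and all names here are assumptions for demonstration.
"""
import numpy as np


def convergent_validity(afec_scores, human_codes):
    """Pearson correlation between AFEC output and human-coded affect."""
    return float(np.corrcoef(afec_scores, human_codes)[0, 1])


def dyadic_correspondence(mother, daughter, max_lag_frames=90):
    """Peak lagged correlation between two affect time series, searched
    within +/- max_lag_frames. A negative peak lag means the daughter's
    affect trails the mother's."""
    n = len(mother)
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag_frames, max_lag_frames + 1):
        if lag >= 0:
            a, b = mother[lag:], daughter[:n - lag]
        else:
            a, b = mother[:n + lag], daughter[-lag:]
        r = float(np.corrcoef(a, b)[0, 1])
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fps, seconds = 30, 60
    t = np.arange(fps * seconds)
    # Synthetic positive-affect series; the daughter echoes the mother ~1 s later.
    mother = np.sin(2 * np.pi * t / 300) + 0.3 * rng.standard_normal(t.size)
    daughter = np.roll(mother, fps) + 0.3 * rng.standard_normal(t.size)
    human = mother + 0.2 * rng.standard_normal(t.size)  # noisy human codes

    print(f"convergent validity r = {convergent_validity(mother, human):.2f}")
    lag, r = dyadic_correspondence(mother, daughter)
    print(f"peak correspondence r = {r:.2f} at lag {lag} frames ({lag / fps:.1f} s)")
```

In a real analysis the two series would come from an AFEC system's per-frame output, and a windowed version of the same lagged correlation could track how correspondence changes over the course of a conflict discussion.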


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/01be/7319037/f204660870a2/nihms-1600644-f0001.jpg

