Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.

Author Information

Girard Jeffrey M, Chu Wen-Sheng, Jeni László A, Cohn Jeffrey F, De la Torre Fernando, Sayette Michael A

Affiliations

Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260.

Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213.

Publication Information

Proc Int Conf Autom Face Gesture Recognit. 2017 May-Jun;2017:581-588. doi: 10.1109/FG.2017.144. Epub 2017 Jun 29.

Abstract

Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.
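To make the abstract's evaluation setup concrete, here is a minimal sketch of one of the baselines it names: a per-AU linear SVM occurrence detector scored by F1 with a bootstrap confidence interval, echoing the paper's "means and confidence intervals" reporting. This is an illustration only — the random stand-in features, the dimensions, and the bootstrap_ci helper are hypothetical, not the GFT release or the authors' actual pipeline.

```python
# Illustrative per-AU linear-SVM occurrence baseline. All data here is
# random stand-in data; in GFT the features would come from the released
# videos and the labels from the expert FACS occurrence annotations.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical per-frame appearance features and binary AU labels.
n_train, n_test, n_feat = 5000, 2000, 128
X_train = rng.normal(size=(n_train, n_feat))
y_train = rng.integers(0, 2, size=n_train)
X_test = rng.normal(size=(n_test, n_feat))
y_test = rng.integers(0, 2, size=n_test)

# One binary detector per action unit; LinearSVC mirrors the
# "linear SVM" baseline named in the abstract.
clf = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y_train)
pred = clf.predict(X_test)

def bootstrap_ci(y_true, y_pred, n_boot=1000, alpha=0.05):
    """Bootstrap the F1 score over test frames (hypothetical helper)."""
    idx = np.arange(len(y_true))
    scores = []
    for _ in range(n_boot):
        s = rng.choice(idx, size=len(idx), replace=True)
        scores.append(f1_score(y_true[s], y_pred[s], zero_division=0))
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(scores)), (float(lo), float(hi))

mean_f1, (ci_lo, ci_hi) = bootstrap_ci(y_test, pred)
print(f"F1 = {mean_f1:.3f} (95% CI [{ci_lo:.3f}, {ci_hi:.3f}])")
```

In the paper's actual protocol, splits are made at the group level under identical partitioning for all methods so that baselines (linear SVM, deep learning, active patch learning, personalized classification) are directly comparable.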

Similar Articles

1. Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.
   Proc Int Conf Autom Face Gesture Recognit. 2017 May-Jun;2017:581-588. doi: 10.1109/FG.2017.144. Epub 2017 Jun 29.
2. Joint Patch and Multi-label Learning for Facial Action Unit and Holistic Expression Recognition.
   IEEE Trans Image Process. 2016 Aug;25(8):3931-3946. doi: 10.1109/TIP.2016.2570550. Epub 2016 May 18.
3. Joint Patch and Multi-label Learning for Facial Action Unit Detection.
   Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2015 Jun;2015:2207-2216. doi: 10.1109/CVPR.2015.7298833.
4. Confidence Preserving Machine for Facial Action Unit Detection.
   IEEE Trans Image Process. 2016 Oct;25(10):4753-4767. doi: 10.1109/TIP.2016.2594486. Epub 2016 Jul 27.
5. Crossing Domains for AU Coding: Perspectives, Approaches, and Measures.
   IEEE Trans Biom Behav Identity Sci. 2020 Apr;2(2):158-171. doi: 10.1109/tbiom.2020.2977225. Epub 2020 Mar 3.
6. Effects of being watched on eye gaze and facial displays of typical and autistic individuals during conversation.
   Autism. 2021 Jan;25(1):210-226. doi: 10.1177/1362361320951691. Epub 2020 Aug 27.
7. Performance-driven facial animation: basic research on human judgments of emotional state in facial avatars.
   Cyberpsychol Behav. 2001 Aug;4(4):471-87. doi: 10.1089/109493101750527033.
8. Cross-domain AU Detection: Domains, Learning Approaches, and Measures.
   Proc Int Conf Autom Face Gesture Recognit. 2019 May;2019. doi: 10.1109/FG.2019.8756543. Epub 2019 Jul 11.
9. Detection of Genuine and Posed Facial Expressions of Emotion: Databases and Methods.
   Front Psychol. 2021 Jan 15;11:580287. doi: 10.3389/fpsyg.2020.580287. eCollection 2020.
10. Learning Pain from Action Unit Combinations: A Weakly Supervised Approach via Multiple Instance Learning.
    IEEE Trans Affect Comput. 2022 Jan-Mar;13(1):135-146. doi: 10.1109/taffc.2019.2949314. Epub 2019 Oct 30.

Cited By

1. Multimodal Prediction of Obsessive-Compulsive Disorder and Comorbid Depression Severity and Energy Delivered by Deep Brain Electrodes.
   IEEE Trans Affect Comput. 2024 Oct-Dec;15(4):2025-2041. doi: 10.1109/taffc.2024.3395117. Epub 2024 Apr 30.
2. Multimodal Feature Selection for Detecting Mothers' Depression in Dyadic Interactions with their Adolescent Offspring.
   IEEE Trans Affect Comput. 2023 Jan;2023. doi: 10.1109/fg57933.2023.10042796. Epub 2023 Feb 16.
3. SHAP-based Prediction of Mother's History of Depression to Understand the Influence on Child Behavior.
   Proc ACM Int Conf Multimodal Interact. 2023;2023:537-544. doi: 10.1145/3577190.3614136. Epub 2023 Oct 9.
4. Development of the RIKEN database for dynamic facial expressions with multiple angles.
   Sci Rep. 2023 Dec 8;13(1):21785. doi: 10.1038/s41598-023-49209-8.
5. Infant AFAR: Automated facial action recognition in infants.
   Behav Res Methods. 2023 Apr;55(3):1024-1035. doi: 10.3758/s13428-022-01863-y. Epub 2022 May 10.
6. Macro- and Micro-Expressions Facial Datasets: A Survey.
   Sensors (Basel). 2022 Feb 16;22(4):1524. doi: 10.3390/s22041524.
7. Toward Multimodal Modeling of Emotional Expressiveness.
   Proc ACM Int Conf Multimodal Interact. 2020 Oct;2020:548-557. doi: 10.1145/3382507.3418887.
8. Crossing Domains for AU Coding: Perspectives, Approaches, and Measures.
   IEEE Trans Biom Behav Identity Sci. 2020 Apr;2(2):158-171. doi: 10.1109/tbiom.2020.2977225. Epub 2020 Mar 3.

References

1. Dense 3D Face Alignment from 2D Video for Real-Time Use.
   Image Vis Comput. 2017 Feb;58:13-24. doi: 10.1016/j.imavis.2016.05.009. Epub 2016 May 24.
2. Joint Patch and Multi-label Learning for Facial Action Unit and Holistic Expression Recognition.
   IEEE Trans Image Process. 2016 Aug;25(8):3931-3946. doi: 10.1109/TIP.2016.2570550. Epub 2016 May 18.
3. Selective Transfer Machine for Personalized Facial Expression Analysis.
   IEEE Trans Pattern Anal Mach Intell. 2017 Mar;39(3):529-545. doi: 10.1109/TPAMI.2016.2547397. Epub 2016 Mar 28.
4. Confidence Preserving Machine for Facial Action Unit Detection.
   IEEE Trans Image Process. 2016 Oct;25(10):4753-4767. doi: 10.1109/TIP.2016.2594486. Epub 2016 Jul 27.
5. How much training data for facial action unit detection?
   IEEE Int Conf Autom Face Gesture Recognit Workshops. 2015 May;1. doi: 10.1109/FG.2015.7163106.
6. Spontaneous facial expression in unscripted social interactions can be measured automatically.
   Behav Res Methods. 2015 Dec;47(4):1136-1147. doi: 10.3758/s13428-014-0536-1.
7. Learning Multiscale Active Facial Patches for Expression Analysis.
   IEEE Trans Cybern. 2015 Aug;45(8):1499-510. doi: 10.1109/TCYB.2014.2354351. Epub 2014 Sep 29.
8. Alcohol and group formation: a multimodal investigation of the effects of alcohol on emotion and social bonding.
   Psychol Sci. 2012 Aug 1;23(8):869-78. doi: 10.1177/0956797611435134. Epub 2012 Jul 3.
9. The social brain: neural basis of social knowledge.
   Annu Rev Psychol. 2009;60:693-716. doi: 10.1146/annurev.psych.60.110707.163514.
10. High agreement but low kappa: II. Resolving the paradoxes.
    J Clin Epidemiol. 1990;43(6):551-8. doi: 10.1016/0895-4356(90)90159-m.
