Girard Jeffrey M, Chu Wen-Sheng, Jeni László A, Cohn Jeffrey F, De la Torre Fernando, Sayette Michael A
Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260.
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213.
Proc Int Conf Autom Face Gesture Recognit. 2017 May-Jun;2017:581-588. doi: 10.1109/FG.2017.144. Epub 2017 Jun 29.
Despite the important role that facial expressions play in interpersonal communication, and despite our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The deep learning and active patch learning methods achieved the highest performance. Learn more at http://osf.io/7wcyz.
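For readers interested in what a baseline evaluation of this kind can look like in practice, the following is a minimal illustrative sketch, not the authors' actual pipeline: a per-AU linear SVM trained and tested on a fixed partition, scored with frame-level F1 and a percentile bootstrap confidence interval. The feature files, the bootstrap_ci helper, and the choice of F1 are assumptions made for illustration; the paper's own features, partitioning, and interval procedure may differ.

# Illustrative sketch (not the GFT authors' code): per-AU linear-SVM baseline
# on a fixed train/test split, with a bootstrap CI for frame-level F1.
# Feature files and helper names below are hypothetical placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def bootstrap_ci(y_true, y_pred, n_boot=1000, alpha=0.05):
    # Percentile bootstrap over frames; note this ignores within-participant
    # dependence, which a more careful analysis would account for.
    scores = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        scores.append(f1_score(y_true[idx], y_pred[idx], zero_division=0))
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Placeholder data: rows are video frames, columns are appearance/shape
# features derived from tracked landmarks; y is binary occurrence of one AU.
X_train, y_train = np.load("train_feats.npy"), np.load("train_labels.npy")
X_test, y_test = np.load("test_feats.npy"), np.load("test_labels.npy")

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

f1 = f1_score(y_test, y_pred, zero_division=0)
ci_lo, ci_hi = bootstrap_ci(y_test, y_pred)
print(f"F1 = {f1:.3f} (95% CI {ci_lo:.3f}-{ci_hi:.3f})")

In a multi-AU setting, this procedure would be repeated per action unit and the resulting scores summarized as means with confidence intervals, which is the style of reporting the abstract describes.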