Department of Computing, Imperial College London, 180 Queen’s Gate, London SW7 2AZ, United Kingdom.
IEEE Trans Pattern Anal Mach Intell. 2013 Jun;35(6):1357-69. doi: 10.1109/TPAMI.2012.233.
We propose a method for head-pose-invariant facial expression recognition based on a set of characteristic facial points. To achieve head-pose invariance, we propose the Coupled Scaled Gaussian Process Regression (CSGPR) model for head-pose normalization. In this model, we first independently learn the mappings between the facial points in each pair of (discrete) nonfrontal poses and the frontal pose, and then couple them in order to capture the dependencies between them. During inference, the outputs of the coupled functions from different poses are combined using a gating function devised from the head-pose estimate for the query points. The proposed model outperforms state-of-the-art regression-based approaches to head-pose normalization, 2D and 3D Point Distribution Models (PDMs), and Active Appearance Models (AAMs), especially in cases of unknown poses and imbalanced training data. To the best of our knowledge, the proposed method is the first that can deal with expressive faces in the range from -45° to +45° pan rotation and -30° to +30° tilt rotation, and with continuous changes in head pose, despite being trained on a small set of discrete poses. We evaluate the proposed method on synthetic and real images depicting acted and spontaneously displayed facial expressions.
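The normalization scheme described in the abstract, per-pose regressors from nonfrontal to frontal facial points, combined at inference by a gate on the estimated head pose, can be sketched minimally as follows. This is an illustrative toy, not the paper's CSGPR: it uses plain independent Gaussian Process regressors (the coupling and scaling steps are omitted), a single 2D facial point, a synthetic pan-dependent horizontal compression as the pose effect, and a hypothetical Gaussian weighting on pan angle as the gating function.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical pose model: a nonfrontal pose compresses/stretches the
# x-coordinate of the frontal point as a function of the pan angle.
def observe(frontal, pan):
    out = frontal.copy()
    out[:, 0] *= 1.0 + pan / 90.0
    return out

poses = [-30, 30]                       # discrete training pans (degrees)
frontal_train = rng.normal(size=(200, 2))

# One GP regressor per discrete nonfrontal pose: observed points -> frontal.
gps = {}
for pan in poses:
    X = observe(frontal_train, pan)
    gps[pan] = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3),
        normalize_y=True).fit(X, frontal_train)

def normalize(points, pan_estimate, tau=10.0):
    """Pose-normalize by gating the per-pose GP predictions with a
    Gaussian weight on the estimated pan (a stand-in for the paper's
    gating function over the head-pose estimate)."""
    pans = np.array(sorted(gps))
    w = np.exp(-0.5 * ((pans - pan_estimate) / tau) ** 2)
    w /= w.sum()
    preds = np.stack([gps[p].predict(points) for p in pans])  # (P, N, 2)
    return np.einsum("p,pnd->nd", w, preds)
```

Because the gate weights all trained poses by proximity to the continuous pan estimate, a query at an untrained pose (e.g. 15°) still gets a prediction, blended mostly from the nearest discrete pose, which mirrors how the paper handles continuous head-pose changes despite discrete training poses.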