

Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

Publication information

IEEE Trans Image Process. 2015 Jan;24(1):189-204. doi: 10.1109/TIP.2014.2375634. Epub 2014 Nov 26.

Abstract

Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, Labeled Face Parts in the Wild, and Static Facial Expressions in the Wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.
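The two-stage pipeline described in the abstract — learn a low-dimensional latent space shared by all views, then classify in that space, with inference from either a single view (view-invariant) or from several views (multiview) — can be illustrated with a minimal sketch. Note the heavy simplifications: PCA on concatenated views stands in for the discriminative GPLVM manifold learning, per-view least-squares maps stand in for the GP back-projections, and a k-NN vote stands in for the latent-space classifier. All variable names are hypothetical and none of this reproduces the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: two "views" of the same expressions (e.g. frontal
# and profile feature vectors), generated from a common 2-D latent state.
n, d = 60, 10
latent_true = rng.normal(size=(n, 2))
labels = (latent_true[:, 0] > 0).astype(int)        # two expression classes
A1 = rng.normal(size=(2, d))
A2 = rng.normal(size=(2, d))
view1 = latent_true @ A1 + 0.05 * rng.normal(size=(n, d))
view2 = latent_true @ A2 + 0.05 * rng.normal(size=(n, d))

# Stage 1 (stand-in for the shared discriminative manifold): a common
# latent space learned from the concatenated views via PCA.
joint = np.hstack([view1, view2])
joint_mean = joint.mean(axis=0)
_, _, Vt = np.linalg.svd(joint - joint_mean, full_matrices=False)
shared = (joint - joint_mean) @ Vt[:2].T            # shared latent points

# Per-view linear maps into the shared space (stand-in for the GP
# back-mappings that make single-view, view-invariant inference possible).
W1, _, _, _ = np.linalg.lstsq(view1, shared, rcond=None)
W2, _, _, _ = np.linalg.lstsq(view2, shared, rcond=None)

def classify(z, k=5):
    """Vote among the k nearest training points in the shared space."""
    d2 = np.sum((shared - z) ** 2, axis=1)
    nearest = labels[np.argsort(d2)[:k]]
    return int(np.bincount(nearest).argmax())

# View-invariant inference: classify from view 1 alone.
pred_single = classify(view1[0] @ W1)
# Multiview inference: combine the latent projections of both views.
pred_multi = classify(0.5 * (view1[0] @ W1 + view2[0] @ W2))
```

The point of the sketch is structural: once all views are mapped into one shared manifold, the same classifier serves both test conditions, which is exactly the redundancy across views that the paper exploits.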

