Ma Bingpeng, Shan Shiguang, Chen Xilin, Gao Wen
ICT-ISVISION Joint Research and Development Laboratory for Face Recognition, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100090, China.
IEEE Trans Syst Man Cybern B Cybern. 2008 Dec;38(6):1501-12. doi: 10.1109/TSMCB.2008.928231.
This paper proposes a novel method to estimate head yaw rotation based on the asymmetry of 2-D facial appearance. In traditional appearance-based pose estimation methods, features are typically extracted holistically by subspace analysis such as principal component analysis (PCA) and linear discriminant analysis (LDA), which are not designed to model pose variation directly. In this paper, we argue and demonstrate that the asymmetry in the intensities of each row of the face image is closely related to the yaw rotation of the head and, at the same time, evidently insensitive to the identity of the input face. Specifically, 1-D Gabor filters and the Fourier transform are exploited to extract the asymmetry information. LDA is then applied to the asymmetry features to enhance their discriminative ability. Using a simple nearest centroid classifier, experimental results on two multipose databases show that the proposed features outperform other features. In particular, the generalization ability of the proposed asymmetry features is verified by their impressive performance when the training and testing data sets are heterogeneous.
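The core idea of the abstract (a frontal face is left-right symmetric, so the residual asymmetry in each image row carries yaw information) can be sketched as follows. This is a simplified illustration, not the paper's exact pipeline: the difference between each row and its mirror is summarized by a few low-frequency Fourier magnitudes, whereas the paper extracts the asymmetry with 1-D Gabor filters before the Fourier transform; the feature count `n_freqs` is an illustrative choice.

```python
import numpy as np

def row_asymmetry_features(face, n_freqs=4):
    """Per-row left-right asymmetry of a 2-D face image (simplified sketch).

    Each row is compared with its horizontal mirror; the magnitudes of a few
    low-frequency Fourier coefficients of the difference summarize how
    asymmetric that row is. For a perfectly frontal (symmetric) face the
    features are zero; yaw rotation makes them grow.
    """
    face = np.asarray(face, dtype=float)
    feats = []
    for row in face:
        diff = row - row[::-1]                # zero when the row is symmetric
        spec = np.fft.rfft(diff)
        feats.append(np.abs(spec[:n_freqs]))  # low-frequency magnitudes
    return np.concatenate(feats)
```

In the paper these features are further projected by LDA and classified with a nearest centroid rule; here only the raw asymmetry descriptor is shown.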