Liang Haoran, Liang Ronghua, Song Mingli, He Xiaofei
IEEE Trans Cybern. 2016 Apr;46(4):890-901. doi: 10.1109/TCYB.2015.2417211. Epub 2015 Apr 9.
The desire to reconstruct expressive 3-D face models from 2-D face images has fostered growing interest in the problem of face modeling, a task that is both important and challenging in computer animation. Facial contours and wrinkles are essential for generating a face with a given expression; however, these details are generally ignored or treated only superficially in previous studies on face model reconstruction. We therefore employ coupled radial basis function (RBF) networks to derive an intermediate 3-D face model from a single 2-D face image. To further refine the 3-D face model through landmarks, a coupled dictionary relating 3-D face models to their corresponding 3-D landmarks is learned from the training set via local coordinate coding. A second coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model, so that the final 3-D face is generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis recovers model details more effectively than previous methods.