IEEE Trans Image Process. 2021;30:3815-3827. doi: 10.1109/TIP.2021.3065798. Epub 2021 Mar 25.
We present a novel method to jointly learn a 3D face parametric model and 3D face reconstruction from diverse data sources. Previous methods usually learn 3D face modeling from a single kind of source, such as scanned data or in-the-wild images. Although 3D scans contain accurate geometric information about face shape, the capture systems are expensive and such datasets usually cover only a small number of subjects. In-the-wild face images, on the other hand, are abundant and easy to obtain, but they carry no explicit geometric information. In this paper, we propose a method to learn a unified face model from these diverse sources. Besides scanned face data and face images, we also utilize a large number of RGB-D images captured with an iPhone X to bridge the gap between the two sources. Experimental results demonstrate that training on data from more sources yields a more powerful face model.
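To make the multi-source idea concrete, the sketch below shows how a learnable parametric face model might be trained with a different loss per data source: dense vertex error for registered scans, depth error for RGB-D captures, and sparse 2D landmark error for in-the-wild images. This is a minimal illustration assuming a linear shape model in PyTorch; the names (`LinearFaceModel`, `scan_loss`, etc.), the mesh and basis sizes, and the loss forms are hypothetical and do not reproduce the paper's actual architecture, renderer, or losses.

```python
# Minimal sketch of multi-source joint training (assumptions, not the
# paper's method): a linear shape model supervised by whatever signal
# each data source reliably provides.
import torch

N_VERTS, N_BASIS = 5000, 80  # assumed mesh resolution and basis size

class LinearFaceModel(torch.nn.Module):
    """Learnable mean shape and shape basis: V(z) = mean + basis @ z."""
    def __init__(self):
        super().__init__()
        self.mean = torch.nn.Parameter(torch.zeros(N_VERTS, 3))
        self.basis = torch.nn.Parameter(0.01 * torch.randn(N_VERTS, 3, N_BASIS))

    def forward(self, z):  # z: (batch, N_BASIS) per-sample shape codes
        return self.mean + torch.einsum('vcb,nb->nvc', self.basis, z)

def scan_loss(pred_verts, scan_verts):
    # 3D scans give dense, registered vertex supervision.
    return (pred_verts - scan_verts).pow(2).mean()

def depth_loss(pred_depth, captured_depth):
    # RGB-D frames supervise only the rendered depth channel.
    return (pred_depth - captured_depth).abs().mean()

def landmark_loss(pred_lmk2d, gt_lmk2d):
    # In-the-wild images offer only sparse 2D landmark (and photometric) cues.
    return (pred_lmk2d - gt_lmk2d).pow(2).mean()

model = LinearFaceModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative step on a scan batch; in practice batches from all three
# sources would be mixed and their losses summed with per-source weights,
# and z would come from an encoder or per-sample fitting, not random noise.
z = torch.randn(4, N_BASIS)
scan_gt = torch.randn(4, N_VERTS, 3)  # dummy stand-in for real scan vertices
loss = scan_loss(model(z), scan_gt)   # + w_d * depth_loss(...) + w_l * landmark_loss(...)
opt.zero_grad(); loss.backward(); opt.step()
```

The design point the sketch illustrates is that each source contributes only the supervision it actually provides, so scans, RGB-D captures, and images can all update the same shared model parameters.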