IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):7854-7870. doi: 10.1109/TPAMI.2021.3113164. Epub 2022 Oct 4.
In this paper, we propose an efficient method for robust and accurate 3D self-portraits using a single RGBD camera. Our method can generate detailed and realistic 3D self-portraits in seconds and can handle subjects wearing extremely loose clothing. To achieve highly efficient and robust reconstruction, we propose PIFusion, which combines learning-based 3D recovery with volumetric non-rigid fusion to generate accurate sparse partial scans of the subject. Meanwhile, a non-rigid volumetric deformation method is proposed to continuously refine the learned shape prior. Moreover, a lightweight bundle adjustment algorithm is proposed to guarantee that all the partial scans not only "loop" with each other but also remain consistent with the selected live key observations. Finally, to generate even more realistic portraits, we propose a non-rigid texture optimization that improves texture quality. Additionally, we contribute a benchmark for single-view 3D self-portrait reconstruction: an evaluation dataset containing 10 single-view RGBD sequences of a self-rotating performer wearing various clothes, along with a ground-truth 3D model for the first frame of each sequence. The results and experiments on this dataset show that the proposed method outperforms state-of-the-art methods in accuracy, efficiency, and generality.
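The bundle adjustment described above constrains the partial scans to "loop" around the self-rotating subject. As a loose one-dimensional analogue (not the paper's actual formulation, which operates on full non-rigid scan poses), the sketch below distributes the loop-closure residual evenly across noisy per-scan yaw increments so the scans close back to a full rotation; all function and variable names here are illustrative:

```python
def close_loop(relative_yaws, total=360.0):
    """Toy loop closure: the per-scan yaw increments of a self-rotating
    subject should sum to a full turn. Spread the closure residual
    evenly across all increments (a crude pose-graph-style correction)."""
    residual = total - sum(relative_yaws)
    correction = residual / len(relative_yaws)
    return [y + correction for y in relative_yaws]

if __name__ == "__main__":
    # Hypothetical noisy yaw increments (degrees) from 8 partial scans.
    yaws = [44.0, 46.5, 43.2, 45.8, 44.9, 46.1, 43.7, 44.6]
    corrected = close_loop(yaws)
    print(round(sum(corrected), 3))  # closes to a full 360-degree turn
```

The real method jointly optimizes scan poses against the selected live key observations as well; this sketch only conveys the loop-consistency idea.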