Wen Chao, Zhang Yinda, Cao Chenjie, Li Zhuwen, Xue Xiangyang, Fu Yanwei
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):2166-2180. doi: 10.1109/TPAMI.2022.3169735. Epub 2023 Jan 6.
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses. While many previous works learn to hallucinate the shape directly from priors, we instead further improve the shape quality by leveraging cross-view information with a graph convolution network. Rather than building a direct mapping function from images to 3D shape, our model learns to predict a series of deformations that refine a coarse shape iteratively. Inspired by traditional multiple view geometry methods, our network samples the area around the initial mesh's vertex locations and infers an optimal deformation using perceptual feature statistics built from multiple input images. Extensive experiments show that our model produces accurate 3D shapes that are not only visually plausible from the input perspectives, but also well aligned to arbitrary viewpoints. With the help of its physically driven architecture, our model also generalizes across different semantic categories and numbers of input images. Model analysis experiments show that our model is robust to the quality of the initial mesh and to camera pose error, and can be combined with a differentiable renderer for test-time optimization.
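The core idea of sampling around vertex locations and pooling perceptual feature statistics across views can be illustrated with a minimal sketch. The snippet below is a simplified, hypothetical illustration, not the paper's implementation: it projects 3D points into each view with known intrinsics and poses, samples a feature per view via nearest-pixel lookup (the actual method uses learned CNN feature maps, bilinear sampling, and a graph convolution network to predict deformations), and pools per-point mean and standard deviation across views. The helper names `project` and `cross_view_feature_stats` are assumptions for this sketch.

```python
import numpy as np

def project(points, K, pose):
    """Project 3D points (N, 3) into pixel coordinates using intrinsics K (3x3)
    and a world-to-camera pose (3x4, [R | t]). Hypothetical helper for illustration."""
    R, t = pose[:, :3], pose[:, 3]
    cam = points @ R.T + t                  # camera-frame coordinates
    uv = cam @ K.T                          # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]           # perspective divide

def cross_view_feature_stats(points, feature_maps, Ks, poses):
    """For each 3D point, sample a feature from every view (nearest pixel for
    simplicity) and pool statistics (mean, std) across views, mimicking the
    cross-view perceptual feature pooling described in the abstract."""
    per_view = []
    for fmap, K, pose in zip(feature_maps, Ks, poses):
        uv = np.round(project(points, K, pose)).astype(int)
        u = np.clip(uv[:, 0], 0, fmap.shape[1] - 1)  # clamp to image bounds
        v = np.clip(uv[:, 1], 0, fmap.shape[0] - 1)
        per_view.append(fmap[v, u])         # (N, C) features for this view
    stacked = np.stack(per_view)            # (V, N, C) across all views
    # Pool view-invariant statistics per point; a network would consume these
    # to predict a per-vertex deformation of the coarse mesh.
    return np.concatenate([stacked.mean(axis=0), stacked.std(axis=0)], axis=-1)
```

In the paper's pipeline, these pooled statistics are computed not only at each vertex but at sampled hypothesis positions around it, and a GCN scores the hypotheses to choose the deformation, which is what makes the refinement robust to initial-mesh quality and pose error.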