Yang Jingyu, Guo Daoliang, Li Kun, Wu Zhenchao, Lai Yu-Kun
IEEE Trans Image Process. 2019 Oct;28(10):4746-4761. doi: 10.1109/TIP.2019.2909197. Epub 2019 Apr 4.
We present a novel global non-rigid registration method for dynamic 3D objects. Our method allows objects to undergo large non-rigid deformations and achieves high-quality results even with substantial pose change or camera motion between views. In addition, our method does not require a template prior and uses less raw data than tracking-based methods, since only a sparse set of scans is needed. We simultaneously compute the deformations of all the scans by optimizing a global alignment problem to avoid the well-known loop closure problem, and we use an as-rigid-as-possible constraint to eliminate the shrinkage problem of the deformed shapes, especially near open boundaries of scans. To cope with large-scale problems, we design a coarse-to-fine multi-resolution scheme, which also prevents the optimization from becoming trapped in local minima. The proposed method is evaluated on public datasets and on real datasets captured by an RGB-D sensor. The experimental results demonstrate that the proposed method obtains better results than several state-of-the-art methods.
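The as-rigid-as-possible constraint mentioned in the abstract is, in its standard form, an energy that penalizes deviations of local deformations from pure rotations. The paper's exact formulation and weighting may differ; as a hedged sketch, the widely used ARAP energy over a deformation graph or mesh is:

```latex
% Standard as-rigid-as-possible (ARAP) energy (sketch; the paper's
% exact energy, weights w_ij, and neighborhoods N(i) may differ).
% p_i  : original vertex/node positions
% p_i' : deformed positions
% R_i  : best-fitting local rotation at vertex/node i
E_{\mathrm{ARAP}} \;=\;
\sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij}
\left\| \left(\mathbf{p}'_i - \mathbf{p}'_j\right)
      - \mathbf{R}_i \left(\mathbf{p}_i - \mathbf{p}_j\right) \right\|^2
```

Minimizing this term keeps local edge differences close to rotated copies of their rest-state counterparts, which is why it counteracts shrinkage of deformed shapes near open scan boundaries, where data terms alone leave the deformation under-constrained.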