Li Ruibo, Zhang Chi, Wang Zhe, Shen Chunhua, Lin Guosheng
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):8106-8122. doi: 10.1109/TPAMI.2024.3401029. Epub 2024 Nov 6.
In this article, we investigate self-supervised 3D scene flow estimation and class-agnostic motion prediction on point clouds. A realistic scene can be well modeled as a collection of rigidly moving parts; its scene flow can therefore be represented as a combination of the rigid motions of these individual parts. Building upon this observation, we propose to generate pseudo scene flow labels for self-supervised learning through piecewise rigid motion estimation, in which the source point cloud is decomposed into local regions and each region is treated as rigid. By rigidly aligning each region with its potential counterpart in the target point cloud, we obtain a region-specific rigid transformation that generates its pseudo flow labels. To mitigate the impact of potential outliers on label generation, when solving the rigid registration for each region we alternately perform three steps: establishing point correspondences, measuring the confidence of the correspondences, and updating the rigid transformation based on the correspondences and their confidence. As a result, confident correspondences dominate label generation, and a validity mask is derived for the generated pseudo labels. By using the pseudo labels together with their validity mask for supervision, models can be trained in a self-supervised manner. Extensive experiments on the FlyingThings3D and KITTI datasets demonstrate that our method achieves new state-of-the-art performance in self-supervised scene flow learning without any ground-truth scene flow for supervision, even outperforming some supervised counterparts. Additionally, our method extends to class-agnostic motion prediction and significantly outperforms previous state-of-the-art self-supervised methods on the nuScenes dataset.
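To make the alternating scheme described in the abstract concrete, the sketch below illustrates one standard way to realize confidence-weighted piecewise rigid registration for a single region: nearest-neighbor correspondences, residual-based confidence weights, and a weighted Kabsch (SVD) update, iterated until the transform stabilizes. This is a minimal, generic illustration in the spirit of weighted/trimmed ICP, not the paper's exact formulation; the Gaussian confidence kernel, the validity threshold, and all function and parameter names (`region_pseudo_flow`, `sigma`, `valid_thresh`) are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def weighted_rigid_transform(src, dst, w):
    """Weighted Kabsch/Procrustes: R, t minimizing sum_i w_i ||R src_i + t - dst_i||^2."""
    w = w / (w.sum() + 1e-8)
    src_c = (w[:, None] * src).sum(0)                   # weighted centroids
    dst_c = (w[:, None] * dst).sum(0)
    H = (src - src_c).T @ (w[:, None] * (dst - dst_c))  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # reflection correction
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t


def region_pseudo_flow(region_pts, target_pts, n_iters=10, sigma=0.5, valid_thresh=0.3):
    """Alternate three steps for one rigid region (illustrative, not the paper's exact method):
    (1) establish point correspondences, (2) measure correspondence confidence,
    (3) update the rigid transformation with a confidence-weighted Kabsch solve.
    Returns per-point pseudo flow labels and a validity mask."""
    tree = cKDTree(target_pts)
    R, t = np.eye(3), np.zeros(3)
    conf = np.ones(len(region_pts))
    for _ in range(n_iters):
        moved = region_pts @ R.T + t
        dists, idx = tree.query(moved)                  # step 1: nearest-neighbor correspondences
        conf = np.exp(-(dists / sigma) ** 2)            # step 2: soft confidence (assumed Gaussian kernel)
        R, t = weighted_rigid_transform(region_pts, target_pts[idx], conf)  # step 3: weighted update
    pseudo_flow = region_pts @ R.T + t - region_pts     # pseudo label = rigidly transformed point minus original
    valid_mask = conf > valid_thresh                    # low-confidence points are excluded from supervision
    return pseudo_flow, valid_mask
```

In this sketch, confident correspondences dominate the transform estimate because they carry larger weights in the SVD solve, and the final confidences double as the validity mask that gates which pseudo labels are used for training.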