Kadam Pranav, Zhang Min, Liu Shan, Kuo C-C Jay
IEEE Trans Image Process. 2022;31:2710-2725. doi: 10.1109/TIP.2022.3160609. Epub 2022 Mar 29.
Inspired by the recent PointHop classification method, an unsupervised 3D point cloud registration method, called R-PointHop, is proposed in this work. R-PointHop first determines a local reference frame (LRF) for every point using its nearest neighbors and computes local attributes. Next, R-PointHop obtains local-to-global hierarchical features through point downsampling, neighborhood expansion, attribute construction, and dimensionality reduction steps. Then, point correspondences are built in the hierarchical feature space using the nearest-neighbor rule. Afterwards, a subset of salient points with good correspondences is selected to estimate the 3D transformation. The LRF makes the hierarchical point features invariant to rotation and translation, which makes R-PointHop more robust in building point correspondences, even when the rotation angles are large. Experiments on the 3DMatch, ModelNet40, and Stanford Bunny datasets demonstrate the effectiveness of R-PointHop for 3D point cloud registration. R-PointHop's model size and training time are an order of magnitude smaller than those of deep learning methods, and its registration errors are smaller, making it a green and accurate solution. Our code is available on GitHub (https://github.com/pranavkdm/R-PointHop).
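To make the final matching and alignment steps concrete, the sketch below shows nearest-neighbor correspondence in feature space followed by the closed-form SVD (Kabsch / orthogonal Procrustes) solution for the rigid transform. This is a minimal NumPy illustration under the assumption that per-point features for both clouds are already available; the function names are hypothetical and this is not the released R-PointHop implementation.

import numpy as np

def match_features(feat_src, feat_tgt):
    """Nearest-neighbor correspondence in feature space.

    feat_src: (N, D) features of the source cloud.
    feat_tgt: (M, D) features of the target cloud.
    Returns idx of length N so that target point idx[i] is the
    feature-space nearest neighbor of source point i.
    """
    # Pairwise squared distances between feature rows, (N, M).
    d2 = (np.square(feat_src).sum(1, keepdims=True)
          + np.square(feat_tgt).sum(1)
          - 2.0 * feat_src @ feat_tgt.T)
    return d2.argmin(axis=1)

def estimate_rigid_transform(src, tgt):
    """Least-squares R, t with R @ src[i] + t ~= tgt[i], solved in
    closed form via SVD (Kabsch). src, tgt: (N, 3) corresponded points."""
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    H = src_c.T @ tgt_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

In use, one would call match_features to pair source points with target points and then estimate_rigid_transform on the paired coordinates. The paper additionally selects a salient subset of correspondences before estimating the transformation, which this sketch omits.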