Liu Gexin, Xue Ruixiang, Li Jiaxin, Ding Dandan, Ma Zhan
IEEE Trans Vis Comput Graph. 2024 Oct;30(10):6740-6753. doi: 10.1109/TVCG.2023.3336936. Epub 2024 Sep 4.
Lossy Geometry-based Point Cloud Compression (G-PCC) inevitably impairs the geometry information of point clouds, which degrades the quality of experience (QoE) in reconstruction and/or misleads decisions in downstream tasks such as classification. To tackle this, we propose GRNet for geometry restoration of G-PCC compressed large-scale point clouds. By analyzing the content characteristics of original and G-PCC compressed point clouds, we attribute G-PCC distortion to two key factors: point vanishing and point displacement. Visible impairments on a point cloud are usually dominated by one factor or are a superposition of both, depending on the density of the original point cloud. We therefore employ two coordinate reconstruction models, termed Coordinate Expansion and Coordinate Refinement, to address point vanishing and point displacement, respectively. In addition, 4 bytes of auxiliary density information are signaled in the bitstream to guide the selection of Coordinate Expansion, Coordinate Refinement, or their combination. Before being fed into the coordinate reconstruction module, the G-PCC compressed point cloud is first processed by a Feature Analysis Module for multiscale information fusion, in which a kNN-based Transformer at each scale adaptively characterizes neighborhood geometric dynamics for effective restoration. Under the common test conditions recommended by the MPEG standardization committee, GRNet significantly improves upon the G-PCC anchor and remarkably outperforms state-of-the-art methods on a wide variety of point clouds (e.g., solid, dense, and sparse samples), both quantitatively and qualitatively. Meanwhile, GRNet runs fairly fast and uses a smaller model than existing learning-based approaches, making it attractive to industry practitioners.
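To make the "kNN-based Transformer" idea concrete: the Feature Analysis Module attends over each point's local neighborhood rather than the whole cloud. The sketch below is not the paper's implementation; it is a minimal single-head illustration of attention restricted to kNN neighborhoods, with hypothetical function names (`knn_indices`, `knn_attention`) and plain NumPy in place of a trained network.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (self included)."""
    # Pairwise squared Euclidean distances, shape (N, N).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    # For each point, the k closest points by distance.
    return np.argsort(d2, axis=1)[:, :k]

def knn_attention(features, points, k):
    """Single-head scaled dot-product attention over kNN neighborhoods.

    features: (N, C) per-point features; points: (N, 3) coordinates.
    Returns (N, C) features aggregated from each point's k neighbors.
    """
    idx = knn_indices(points, k)             # (N, k)
    neigh = features[idx]                    # (N, k, C) neighbor features
    q = features[:, None, :]                 # (N, 1, C) each point as query
    # Attention logits: query-neighbor similarity, scaled by sqrt(C).
    logits = np.sum(q * neigh, axis=-1) / np.sqrt(features.shape[1])
    # Softmax over the k neighbors of each point.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)        # (N, k), rows sum to 1
    # Attention-weighted aggregation of neighborhood features.
    return np.sum(w[:, :, None] * neigh, axis=1)
```

In the paper's multiscale setting, a block like this would run at each scale of the Feature Analysis Module, letting the weights adapt to local geometric structure instead of using a fixed pooling rule.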