School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China.
School of Geography and Remote Sensing, Guangzhou University, Guangzhou 510006, China.
Sensors (Basel). 2021 Sep 15;21(18):6193. doi: 10.3390/s21186193.
Classification is a fundamental task in airborne laser scanning (ALS) point cloud processing and applications. The task is challenging because outdoor scenes are highly complex and the points are irregularly distributed. Many existing methods based on deep learning techniques have drawbacks, such as complex pre/post-processing steps, high sampling costs, and a limited receptive field size. In this paper, we propose a graph attention feature fusion network (GAFFNet) that achieves satisfactory classification performance by capturing wider contextual information from the ALS point cloud. Based on the graph attention mechanism, we first design a neighborhood feature fusion unit and an extended neighborhood feature fusion block, which effectively enlarge the receptive field of each point. On this basis, we further design a neural network with an encoder-decoder architecture to obtain semantic features of the point cloud at different levels, enabling more accurate classification. We evaluate our method on a publicly available ALS point cloud dataset provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). The experimental results show that our method effectively distinguishes nine types of ground objects and achieves more satisfactory results across different evaluation metrics than other approaches.
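The core idea of attention-based neighborhood feature fusion can be sketched as follows. This is a minimal illustrative example, not the paper's actual network: it assumes a k-nearest-neighbor graph over the points and a toy attention score computed from relative features, whereas GAFFNet learns its attention weights with trainable parameters.

```python
import numpy as np

def knn_indices(points, k):
    # Pairwise squared distances; for each point, return the indices
    # of its k nearest neighbors (excluding the point itself).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]

def graph_attention_fusion(feats, points, k=3):
    """Fuse each point's neighborhood features with softmax attention
    weights derived from relative (neighbor minus center) features.
    The scoring function here is a hand-crafted stand-in; in GAFFNet
    the weights are produced by a learned graph attention mechanism."""
    nbr = knn_indices(points, k)               # (n, k) neighbor indices
    nbr_feats = feats[nbr]                     # (n, k, c) neighbor features
    diff = nbr_feats - feats[:, None, :]       # relative feature encoding
    scores = diff.sum(-1)                      # (n, k) toy attention scores
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # softmax over the neighborhood
    return (w[:, :, None] * nbr_feats).sum(axis=1)  # (n, c) fused features
```

Stacking such units over progressively larger neighborhoods is one way to enlarge each point's receptive field without expensive global sampling.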