Lei Xiangda, Wang Hongtao, Wang Cheng, Zhao Zongze, Miao Jianqi, Tian Puguang
School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454000, China.
Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China.
Sensors (Basel). 2020 Dec 6;20(23):6969. doi: 10.3390/s20236969.
Airborne laser scanning (ALS) point clouds have been widely used in many fields because they provide three-dimensional data with high accuracy over large areas. However, because ALS data are discrete, irregularly distributed, and noisy, accurately identifying typical surface objects from 3D point clouds remains a challenge. In recent years, many researchers have achieved improved results in classifying 3D point clouds by using various deep learning methods. However, most of these methods require a large number of training samples and cannot be widely applied in complex scenarios. In this paper, we propose an ALS point cloud classification method that integrates an improved fully convolutional network with transfer learning, using multi-scale and multi-view deep features. First, shallow features of the ALS point cloud, such as height, intensity, and change of curvature, are extracted to generate feature maps through multi-scale voxelization and multi-view projection. Second, these feature maps are fed into the pre-trained DenseNet201 model to derive deep features, which serve as input to a fully convolutional neural network with convolutional and pooling layers. This network integrates local and global features to classify the ALS point cloud. Finally, a graph-cuts algorithm that considers context information is used to refine the classification results. We tested our method on the semantic 3D labeling dataset of the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the overall accuracy and the average F1 score obtained by the proposed method are 89.84% and 83.62%, respectively, when only 16,000 points of the original data are used for training.
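The feature-map generation step described in the abstract can be illustrated with a minimal sketch. The function name, grid parameters, and max-pooling aggregation below are assumptions for illustration only, not the paper's implementation: it rasterizes a toy point cloud onto the XY plane, keeping one per-point attribute (here, height) per cell, the way a single-view, single-scale projection channel might be built before being stacked with intensity and curvature channels.

```python
# Hypothetical sketch of top-view feature-map rasterization for ALS points.
# One channel (height) is shown; intensity and change-of-curvature channels
# would be rasterized the same way and stacked into a multi-channel image.
import numpy as np

def project_to_feature_map(points, values, grid_size=8, cell=1.0):
    """Project 3D points onto the XY plane, keeping the max value per cell.

    points : (N, 3) array of x, y, z coordinates
    values : (N,) per-point attribute (e.g. normalized height)
    """
    fmap = np.zeros((grid_size, grid_size), dtype=np.float32)
    # Map each point's x/y coordinate to a grid column/row index.
    ix = np.clip((points[:, 0] / cell).astype(int), 0, grid_size - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, grid_size - 1)
    for i, j, v in zip(iy, ix, values):
        fmap[i, j] = max(fmap[i, j], v)  # max-pooling within each cell
    return fmap

# Toy point cloud: two points falling into different grid cells.
pts = np.array([[0.5, 0.5, 2.0], [3.5, 1.5, 5.0]])
fm = project_to_feature_map(pts, pts[:, 2])
print(fm[0, 0], fm[1, 3])  # 2.0 5.0
```

Repeating the projection at several voxel scales and from several view directions, then stacking the resulting maps, would yield the multi-scale, multi-view inputs that the pre-trained DenseNet201 consumes.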