

Multi-Scale Attention Network for Building Extraction from High-Resolution Remote Sensing Images.

Author Information

Chang Jing, He Xiaohui, Li Panle, Tian Ting, Cheng Xijie, Qiao Mengjia, Zhou Tao, Zhang Beibei, Chang Ziqian, Fan Tingwei

Affiliations

School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China.

School of Geoscience and Technology, Zhengzhou University, Zhengzhou 450001, China.

Publication Information

Sensors (Basel). 2024 Feb 4;24(3):1010. doi: 10.3390/s24031010.

Abstract

Precise building extraction from high-resolution remote sensing images has significant applications in urban planning, resource management, and environmental conservation. In recent years, deep neural networks (DNNs) have garnered substantial attention for their ability to learn and extract features, becoming integral to building extraction methodologies and yielding noteworthy performance. Nonetheless, prevailing DNN-based models for building extraction often overlook spatial information during the feature extraction phase. Additionally, many existing models employ a simplistic and direct approach in the feature fusion stage, potentially leading to spurious target detection and the amplification of internal noise. To address these concerns, we present a multi-scale attention network (MSANet) tailored for building extraction from high-resolution remote sensing images. In our approach, we first extract multi-scale building feature information by leveraging a multi-scale channel attention mechanism and a multi-scale spatial attention mechanism. We then apply adaptive hierarchical weighting to the extracted building features and introduce a gating mechanism to facilitate the effective fusion of multi-scale features. The efficacy of the proposed MSANet was evaluated on the WHU aerial image dataset and the WHU satellite image dataset. The experimental results demonstrate compelling performance, with F1 scores of 93.76% and 77.64% on the WHU aerial imagery dataset and WHU satellite dataset II, respectively. Furthermore, the intersection over union (IoU) values were 88.25% and 63.46%, surpassing benchmarks set by DeepLabV3 and GSMC.
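To make the abstract's three named ingredients concrete, the following is a minimal, illustrative sketch of multi-scale channel attention, multi-scale spatial attention, and a gated fusion of multi-scale features. It is not the authors' released MSANet code; the use of PyTorch, the module names, the pooling scales and dilation rates, and the channel sizes are all assumptions made for illustration only.

```python
# Illustrative sketch only (not the authors' MSANet implementation).
# Assumed: PyTorch, pooling scales (1, 2, 4), dilations (1, 2, 4), 64 channels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleChannelAttention(nn.Module):
    """Channel attention built from pooled descriptors at several spatial scales."""

    def __init__(self, channels: int, reduction: int = 16, scales=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(s) for s in scales)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = torch.zeros_like(x[:, :, :1, :1])
        for pool in self.pools:
            # Per-scale descriptor, reduced back to one weight per channel.
            weight = weight + self.mlp(pool(x)).mean(dim=(2, 3), keepdim=True)
        return x * torch.sigmoid(weight)


class MultiScaleSpatialAttention(nn.Module):
    """Spatial attention map from parallel convolutions with different dilation rates."""

    def __init__(self, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(2, 1, kernel_size=3, padding=d, dilation=d) for d in dilations
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-wise mean and max describe where the strong responses lie.
        desc = torch.cat(
            [x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1
        )
        attn = torch.sigmoid(sum(branch(desc) for branch in self.branches))
        return x * attn


class GatedFusion(nn.Module):
    """Fuse shallow and up-sampled deep features through a learned gate instead of naive addition."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear", align_corners=False)
        g = self.gate(torch.cat([low, high], dim=1))
        # Adaptive per-pixel weighting of the two scales.
        return g * low + (1.0 - g) * high


if __name__ == "__main__":
    low = torch.randn(1, 64, 128, 128)   # shallow, high-resolution features
    high = torch.randn(1, 64, 32, 32)    # deep, low-resolution features
    low = MultiScaleChannelAttention(64)(low)
    low = MultiScaleSpatialAttention()(low)
    fused = GatedFusion(64)(low, high)
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

For reference, the reported scores use the standard segmentation metrics for the building class: IoU = TP / (TP + FP + FN) and F1 = 2 · precision · recall / (precision + recall).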


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0620/10857135/2cf9244c43b1/sensors-24-01010-g001.jpg
