

A method for extracting buildings from remote sensing images based on 3DJA-UNet3+.

Authors

Li Yingjian, Li Yonggang, Zhu Xiangbin, Fang Haojie, Ye Lihua

Affiliations

School of Computer Science and Technology, Zhejiang Normal University, Jinhua, 321004, China.

College of Information Science and Engineering, Jiaxing University, Jiaxing, 314001, China.

Publication

Sci Rep. 2024 Aug 17;14(1):19067. doi: 10.1038/s41598-024-70019-z.

DOI: 10.1038/s41598-024-70019-z
PMID: 39154127
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11330448/
Abstract

Building extraction aims to extract building pixels from remote sensing imagery, which plays a significant role in urban planning, dynamic urban monitoring, and many other applications. UNet3+ is widely applied in building extraction from remote sensing images. However, it still faces issues such as low segmentation accuracy, imprecise boundary delineation, and the complexity of network models. Therefore, based on the UNet3+ model, this paper proposes a 3D Joint Attention (3DJA) module that effectively enhances the correlation between local and global features, obtaining more accurate object semantic information and enhancing feature representation. The 3DJA module models semantic interdependence in the vertical and horizontal dimensions to obtain spatial encoding information for the feature map, as well as in the channel dimension to increase the correlation between dependent channel maps. In addition, a bottleneck module is constructed to reduce the number of network parameters and improve model training efficiency. Extensive experiments are conducted on the publicly accessible WHU, INRIA, and Massachusetts building datasets, and the benchmark models BOMSC-Net, CVNet, SCA-Net, SPCL-Net, ACMFNet, and MFCF-Net are selected for comparison with the 3DJA-UNet3+ model proposed in this paper. The experimental results show that 3DJA-UNet3+ achieves competitive results in three evaluation indicators: overall accuracy, mean intersection over union, and F1-score. The code will be available at https://github.com/EnjiLi/3DJA-UNet3Plus.
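The core idea described above, gating a feature map jointly along its height, width, and channel axes, can be illustrated with a toy NumPy sketch. This is a hypothetical simplification for intuition only: the published 3DJA module learns its transforms with convolutions, and the authoritative implementation is the linked repository.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_attention(feat):
    """Toy joint attention over a (C, H, W) feature map.

    Pools along each of the three axes, turns each pooled profile into a
    multiplicative gate with a sigmoid, and rescales the input. This is a
    simplified stand-in for the paper's 3DJA module, which learns these
    gates rather than computing them from raw means.
    """
    # Directional pooling: one descriptor per row, per column, per channel.
    h_profile = feat.mean(axis=(0, 2))      # (H,) vertical context
    w_profile = feat.mean(axis=(0, 1))      # (W,) horizontal context
    c_profile = feat.mean(axis=(1, 2))      # (C,) channel context
    # Convert each profile to gates in (0, 1) and broadcast over the map.
    gh = sigmoid(h_profile)[None, :, None]  # broadcast over C and W
    gw = sigmoid(w_profile)[None, None, :]  # broadcast over C and H
    gc = sigmoid(c_profile)[:, None, None]  # broadcast over H and W
    return feat * gh * gw * gc

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
y = joint_attention(x)
print(y.shape)  # attention reweights values but preserves the map shape
```

Because each gate lies strictly in (0, 1), the module only attenuates responses; the learned version in the paper can instead emphasize informative rows, columns, and channels.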


Figures 1–11 (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/bc9a4990f71d/41598_2024_70019_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/5a27d4bec4fc/41598_2024_70019_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/887e58b7dc6a/41598_2024_70019_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/2de5e5a2db01/41598_2024_70019_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/27d6d26df3b9/41598_2024_70019_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/56dae9d13cc1/41598_2024_70019_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/bb00dabfff82/41598_2024_70019_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/f89fdbc6c199/41598_2024_70019_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/a884bd351ffa/41598_2024_70019_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/916555f1f9d9/41598_2024_70019_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2519/11330448/3db9bfed6fc4/41598_2024_70019_Fig11_HTML.jpg

Similar Articles

1. A method for extracting buildings from remote sensing images based on 3DJA-UNet3+.
   Sci Rep. 2024 Aug 17;14(1):19067. doi: 10.1038/s41598-024-70019-z.
2. AGs-Unet: Building Extraction Model for High Resolution Remote Sensing Images Based on Attention Gates U Network.
   Sensors (Basel). 2022 Apr 11;22(8):2932. doi: 10.3390/s22082932.
3. Asymmetric Network Combining CNN and Transformer for Building Extraction from Remote Sensing Images.
   Sensors (Basel). 2024 Sep 25;24(19):6198. doi: 10.3390/s24196198.
4. A Dual-Branch Fusion Network Based on Reconstructed Transformer for Building Extraction in Remote Sensing Imagery.
   Sensors (Basel). 2024 Jan 7;24(2):365. doi: 10.3390/s24020365.
5. LOANet: a lightweight network using object attention for extracting buildings and roads from UAV aerial remote sensing images.
   PeerJ Comput Sci. 2023 Jul 11;9:e1467. doi: 10.7717/peerj-cs.1467. eCollection 2023.
6. Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network.
   Sensors (Basel). 2020 Dec 17;20(24):7241. doi: 10.3390/s20247241.
7. A Building Extraction Method for High-Resolution Remote Sensing Images with Multiple Attentions and Parallel Encoders Combining Enhanced Spectral Information.
   Sensors (Basel). 2024 Feb 4;24(3):1006. doi: 10.3390/s24031006.
8. Research on building extraction from remote sensing imagery using efficient lightweight residual network.
   PeerJ Comput Sci. 2024 May 2;10:e2006. doi: 10.7717/peerj-cs.2006. eCollection 2024.
9. Cloud and snow detection of remote sensing images based on improved Unet3+.
   Sci Rep. 2022 Aug 24;12(1):14415. doi: 10.1038/s41598-022-18812-6.
10. Multi-Scale Attention Network for Building Extraction from High-Resolution Remote Sensing Images.
   Sensors (Basel). 2024 Feb 4;24(3):1010. doi: 10.3390/s24031010.

Cited By

1. Sparse point annotations for remote sensing image segmentation.
   Sci Rep. 2025 Jul 27;15(1):27347. doi: 10.1038/s41598-025-12969-6.

References

1. UNet++: A Nested U-Net Architecture for Medical Image Segmentation.
   Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:3-11. doi: 10.1007/978-3-030-00889-5_1. Epub 2018 Sep 20.
2. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.
   IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848. doi: 10.1109/TPAMI.2017.2699184. Epub 2017 Apr 27.
3. Fully Convolutional Networks for Semantic Segmentation.
   IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):640-651. doi: 10.1109/TPAMI.2016.2572683. Epub 2016 May 24.