


Semantic Segmentation of Remote Sensing Data Based on Channel Attention and Feature Information Entropy.

Authors

Duan Sining, Zhao Jingyi, Huang Xinyi, Zhao Shuhe

Affiliations

Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Key Laboratory for Land Satellite Remote Sensing Applications of Ministry of Natural Resources, School of Geography and Ocean Science, Nanjing University, Nanjing 210023, China.

Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China.

Publication

Sensors (Basel). 2024 Feb 19;24(4):1324. doi: 10.3390/s24041324.

DOI: 10.3390/s24041324
PMID: 38400482
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10892758/
Abstract

The common channel attention mechanism maps feature statistics to feature weights. However, the effectiveness of this mechanism may not be assured in remote sensing images due to statistical differences across multiple bands. This paper proposes a novel channel attention mechanism based on feature information called the feature information entropy attention mechanism (FEM). The FEM constructs a relationship between features based on feature information entropy and then maps this relationship to their importance. The Vaihingen dataset and OpenEarthMap dataset are selected for experiments. The proposed method was compared with the squeeze-and-excitation mechanism (SEM), the convolutional block attention mechanism (CBAM), and the frequency channel attention mechanism (FCA). Compared with these three channel attention mechanisms, the mIoU of the FEM on the Vaihingen dataset is improved by 0.90%, 1.10%, and 0.40%, and on the OpenEarthMap dataset it is improved by 2.30%, 2.20%, and 2.10%, respectively. The proposed channel attention mechanism shows better performance in remote sensing land use classification.
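The core idea described above — estimating each channel's information entropy and mapping those entropies to channel importance weights — can be illustrated with a minimal NumPy sketch. This is an assumption-laden reconstruction, not the paper's actual FEM: the histogram-based entropy estimate, the softmax mapping, and the function name `channel_entropy_attention` are all illustrative choices, and the published method may differ in detail.

```python
import numpy as np

def channel_entropy_attention(features, n_bins=32, eps=1e-8):
    """Illustrative sketch of entropy-based channel attention.

    Estimates each channel's Shannon entropy from a histogram of its
    activations, then maps entropies to channel weights via softmax.
    (Hypothetical reconstruction; the paper's FEM may differ.)
    """
    c = features.shape[0]
    entropies = np.empty(c)
    for i in range(c):
        # Histogram of this channel's activations -> empirical distribution
        hist, _ = np.histogram(features[i], bins=n_bins)
        p = hist / (hist.sum() + eps)
        p = p[p > 0]  # drop empty bins so log is defined
        entropies[i] = -np.sum(p * np.log(p))
    # Softmax over entropies -> normalized channel importance weights
    w = np.exp(entropies - entropies.max())
    w /= w.sum()
    # Reweight each channel by its importance
    return features * w[:, None, None], w

# Usage: a toy (channels, H, W) feature map
x = np.random.rand(8, 16, 16)
out, w = channel_entropy_attention(x)
```

The softmax here guarantees the weights are positive and sum to one; a sigmoid gating (as in SE-style blocks) would be an equally plausible choice for the final mapping.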


Figures 1-18: available via the full text at PMC (https://pmc.ncbi.nlm.nih.gov/articles/PMC10892758/).

Similar Articles

1. Semantic Segmentation of Remote Sensing Data Based on Channel Attention and Feature Information Entropy. Sensors (Basel). 2024 Feb 19;24(4):1324. doi: 10.3390/s24041324.
2. MFCA-Net: a deep learning method for semantic segmentation of remote sensing images. Sci Rep. 2024 Mar 8;14(1):5745. doi: 10.1038/s41598-024-56211-1.
3. Semantic segmentation of UAV remote sensing images based on edge feature fusing and multi-level upsampling integrated with Deeplabv3. PLoS One. 2023 Jan 20;18(1):e0279097. doi: 10.1371/journal.pone.0279097. eCollection 2023.
4. MAFF-Net: Multi-Attention Guided Feature Fusion Network for Change Detection in Remote Sensing Images. Sensors (Basel). 2022 Jan 24;22(3):888. doi: 10.3390/s22030888.
5. An improved semantic segmentation algorithm for high-resolution remote sensing images based on DeepLabv3. Sci Rep. 2024 Apr 27;14(1):9716. doi: 10.1038/s41598-024-60375-1.
6. High-Resolution Aerial Imagery Semantic Labeling with Dense Pyramid Network. Sensors (Basel). 2018 Nov 5;18(11):3774. doi: 10.3390/s18113774.
7. A deep inverse convolutional neural network-based semantic classification method for land cover remote sensing images. Sci Rep. 2024 Mar 27;14(1):7313. doi: 10.1038/s41598-024-57408-0.
8. Research on Ground Object Classification Method of High Resolution Remote-Sensing Images Based on Improved DeeplabV3. Sensors (Basel). 2022 Oct 2;22(19):7477. doi: 10.3390/s22197477.
9. A deep learning method for optimizing semantic segmentation accuracy of remote sensing images based on improved UNet. Sci Rep. 2023 May 10;13(1):7600. doi: 10.1038/s41598-023-34379-2.
10. Research on land cover classification of multi-source remote sensing data based on improved U-net network. Sci Rep. 2023 Sep 28;13(1):16275. doi: 10.1038/s41598-023-43317-1.

References Cited in This Article

1. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848. doi: 10.1109/TPAMI.2017.2699184. Epub 2017 Apr 27.
2. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2481-2495. doi: 10.1109/TPAMI.2016.2644615. Epub 2017 Jan 2.