Rotation-Invariant Attention Network for Hyperspectral Image Classification.

Authors

Zheng Xiangtao, Sun Hao, Lu Xiaoqiang, Xie Wei

Publication

IEEE Trans Image Process. 2022;31:4251-4265. doi: 10.1109/TIP.2022.3177322. Epub 2022 Jun 29.

DOI: 10.1109/TIP.2022.3177322
PMID: 35635815
Abstract

Hyperspectral image (HSI) classification refers to identifying the land-cover categories of pixels based on the spectral signatures and spatial information of HSIs. In recent deep learning-based methods, an HSI patch is usually cropped from the original HSI as the input so that spatial information can be exploited, and 3×3 convolution is used as a key component to capture spatial features for HSI classification. However, 3×3 convolution is sensitive to the spatial rotation of its input, so these methods perform worse on rotated HSIs. To alleviate this problem, a rotation-invariant attention network (RIAN) is proposed for HSI classification. First, a center spectral attention (CSpeA) module is designed to suppress redundant spectral bands while avoiding the influence of pixels from other categories. Then, a rectified spatial attention (RSpaA) module is proposed to replace 3×3 convolution for extracting rotation-invariant spectral-spatial features from HSI patches. The CSpeA module, 1×1 convolution, and the RSpaA module are combined to build the proposed RIAN for HSI classification. Experimental results demonstrate that RIAN is invariant to the spatial rotation of HSIs and has superior performance, e.g., achieving an overall accuracy of 86.53% (a 1.04% improvement) on the Houston database. The code for this work is available at https://github.com/spectralpublic/RIAN.
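The abstract's central claim — that a 3×3 convolution is sensitive to spatial rotation of the input patch, while the 1×1 convolutions RIAN builds on are not — can be illustrated with a minimal plain-Python sketch. This is not the authors' implementation (the actual RIAN modules are in the linked repository); it only demonstrates, on a toy single-band patch, that a pooled 1×1-convolution response is unchanged by a 90° rotation while a pooled response from an asymmetric 3×3 kernel is not:

```python
# Toy illustration: rotation sensitivity of 3x3 vs. 1x1 convolution.
# Single-band 2D "patch"; lists of lists stand in for feature maps.

def rotate90(patch):
    """Rotate a 2D patch 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*patch)][::-1]

def conv1x1(patch, w):
    """1x1 convolution with one weight: a per-pixel scaling."""
    return [[w * v for v in row] for row in patch]

def conv3x3_valid(patch, kernel):
    """'Valid' 3x3 convolution (no padding) over a 2D patch."""
    h, wd = len(patch), len(patch[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(wd - 2):
            row.append(sum(kernel[a][b] * patch[i + a][j + b]
                           for a in range(3) for b in range(3)))
        out.append(row)
    return out

def global_sum(patch):
    """Global pooling to a single rotation-comparable scalar."""
    return sum(sum(row) for row in patch)

patch = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

# 1x1 convolution commutes with rotation: pooled responses match exactly.
assert global_sum(conv1x1(rotate90(patch), 0.5)) == global_sum(conv1x1(patch, 0.5))

# An asymmetric 3x3 kernel does not: it picks each window's top-left
# neighbour, so the pooled response changes after rotating the patch.
k = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
r0 = global_sum(conv3x3_valid(patch, k))
r90 = global_sum(conv3x3_valid(rotate90(patch), k))
print(r0, r90)  # -> 14 22
```

The same asymmetry argument motivates the paper's design: spatial context is gathered by the rotation-aware RSpaA attention module instead of fixed 3×3 kernels, and channel mixing is left to 1×1 convolutions, which commute with any spatial permutation of the patch.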


Similar Articles

1. Rotation-Invariant Attention Network for Hyperspectral Image Classification.
IEEE Trans Image Process. 2022;31:4251-4265. doi: 10.1109/TIP.2022.3177322. Epub 2022 Jun 29.
2. A Supervised Segmentation Network for Hyperspectral Image Classification.
IEEE Trans Image Process. 2021;30:2810-2825. doi: 10.1109/TIP.2021.3055613. Epub 2021 Feb 12.
3. An Effective Hyperspectral Image Classification Network Based on Multi-Head Self-Attention and Spectral-Coordinate Attention.
J Imaging. 2023 Jul 10;9(7):141. doi: 10.3390/jimaging9070141.
4. CEGAT: A CNN and enhanced-GAT based on key sample selection strategy for hyperspectral image classification.
Neural Netw. 2023 Nov;168:105-122. doi: 10.1016/j.neunet.2023.08.059. Epub 2023 Sep 17.
5. Three-Dimensional ResNeXt Network Using Feature Fusion and Label Smoothing for Hyperspectral Image Classification.
Sensors (Basel). 2020 Mar 16;20(6):1652. doi: 10.3390/s20061652.
6. A Spectral-Spatial-Dependent Global Learning Framework for Insufficient and Imbalanced Hyperspectral Image Classification.
IEEE Trans Cybern. 2022 Nov;52(11):11709-11723. doi: 10.1109/TCYB.2021.3070577. Epub 2022 Oct 17.
7. Convolutional Neural Network Based on Bandwise-Independent Convolution and Hard Thresholding for Hyperspectral Band Selection.
IEEE Trans Cybern. 2021 Sep;51(9):4414-4428. doi: 10.1109/TCYB.2020.3000725. Epub 2021 Sep 15.
8. Deep Blind Hyperspectral Image Super-Resolution.
IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2388-2400. doi: 10.1109/TNNLS.2020.3005234. Epub 2021 Jun 2.
9. Spectral-Spatial Feature Extraction of Hyperspectral Images Based on Propagation Filter.
Sensors (Basel). 2018 Jun 20;18(6):1978. doi: 10.3390/s18061978.
10. Identification of Turtle-Shell Growth Year Using Hyperspectral Imaging Combined with an Enhanced Spatial-Spectral Attention 3DCNN and a Transformer.
Molecules. 2023 Sep 4;28(17):6427. doi: 10.3390/molecules28176427.

Cited By

1. Enhancing mental health diagnostics through deep learning-based image classification.
Front Med (Lausanne). 2025 Aug 4;12:1627617. doi: 10.3389/fmed.2025.1627617. eCollection 2025.
2. Leveraging potential of limpid attention transformer with dynamic tokenization for hyperspectral image classification.
PLoS One. 2025 Aug 4;20(8):e0328160. doi: 10.1371/journal.pone.0328160. eCollection 2025.
3. Deep learning-based image classification for integrating pathology and radiology in AI-assisted medical imaging.
Sci Rep. 2025 Jul 25;15(1):27029. doi: 10.1038/s41598-025-07883-w.
4. Deep learning-based image classification for AI-assisted integration of pathology and radiology in medical imaging.
Front Med (Lausanne). 2025 Jun 2;12:1574514. doi: 10.3389/fmed.2025.1574514. eCollection 2025.
5. SSATNet: Spectral-spatial attention transformer for hyperspectral corn image classification.
Front Plant Sci. 2025 Jan 16;15:1458978. doi: 10.3389/fpls.2024.1458978. eCollection 2024.
6. Improved YOLOv5-based for small traffic sign detection under complex weather.
Sci Rep. 2023 Sep 27;13(1):16219. doi: 10.1038/s41598-023-42753-3.