School of Computer and Information Engineering, Henan University, Kaifeng, China.
Department of Geography, Kent State University, Kent, OH, USA.
Comput Intell Neurosci. 2022 May 26;2022:7071485. doi: 10.1155/2022/7071485. eCollection 2022.
In recent years, deep learning has been widely used in hyperspectral image (HSI) classification and has demonstrated strong performance. In particular, convolutional neural networks (CNNs) have achieved attractive results in HSI classification. However, HSI contains substantial redundant information, and CNN-based models are constrained by the limited receptive field of convolutions, making it difficult to balance model performance against depth. Furthermore, since the spectral dimension of an HSI can be regarded as sequence data, CNN-based models cannot mine such sequential features well. In this paper, we propose a model named SSA-Transformer to address the above problems and extract the spectral-spatial features of HSI more efficiently. The SSA-Transformer model combines a modified CNN-based spectral-spatial attention mechanism with a self-attention-based transformer using dense connections, allowing it to fuse the local and global features of HSI and improve classification performance. A series of experiments showed that the SSA-Transformer achieved competitive classification accuracy compared with other CNN-based classification methods on three HSI datasets: University of Pavia (PU), Salinas (SA), and Kennedy Space Center (KSC).
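For orientation, below is a minimal PyTorch sketch of the two-stage pipeline the abstract describes: a CNN-based spectral-spatial attention block that reweights an HSI patch, followed by a self-attention transformer encoder over per-band tokens. All module names, layer sizes, and pooling choices are illustrative assumptions, and the dense connections of the original model are omitted for brevity; this is not the authors' implementation.

```python
# Hypothetical sketch of a spectral-spatial-attention + transformer pipeline.
# Shapes and hyperparameters are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class SpectralSpatialAttention(nn.Module):
    """Reweights an HSI patch along its spectral and spatial dimensions."""

    def __init__(self, bands: int):
        super().__init__()
        # Spectral attention: squeeze spatial dims, then score each band.
        self.spectral = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(bands, bands // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bands // 4, bands, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: score each pixel from channel statistics.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (batch, bands, h, w)
        x = x * self.spectral(x)
        stats = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial(stats)


class SSATransformerSketch(nn.Module):
    """Attention-refined patch -> per-band tokens -> transformer -> classes."""

    def __init__(self, bands: int, n_classes: int, d_model: int = 64):
        super().__init__()
        self.attn = SpectralSpatialAttention(bands)
        self.embed = nn.Linear(1, d_model)  # each band becomes one token
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):  # x: (batch, bands, h, w)
        x = self.attn(x)
        tokens = x.mean(dim=(2, 3)).unsqueeze(-1)  # (batch, bands, 1)
        z = self.encoder(self.embed(tokens))       # (batch, bands, d_model)
        return self.head(z.mean(dim=1))            # pool tokens, classify


# Example: a batch of 7x7 patches from a 103-band image (e.g., Pavia).
logits = SSATransformerSketch(bands=103, n_classes=9)(torch.randn(2, 103, 7, 7))
print(logits.shape)  # torch.Size([2, 9])
```

The sketch illustrates why such a hybrid can combine local and global features: the convolutional attention block operates within the receptive field of each kernel, while the transformer's self-attention relates every spectral token to every other token regardless of distance along the band sequence.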