CRABR-Net: A Contextual Relational Attention-Based Recognition Network for Remote Sensing Scene Objective Recognition

Author Information

Guo Ningbo, Jiang Mingyong, Gao Lijing, Tang Yizhuo, Han Jinwei, Chen Xiangning

Affiliations

Space Information Academy, Space Engineering University, Beijing 101407, China.

State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China.

Publication Information

Sensors (Basel). 2023 Aug 29;23(17):7514. doi: 10.3390/s23177514.

Abstract

Remote sensing scene objective recognition (RSSOR) has significant application value in both military and civilian fields. Convolutional neural networks (CNNs) have greatly advanced intelligent objective recognition for remote sensing scenes, but most CNN-based methods for high-resolution RSSOR either use only the feature map of the last layer or directly fuse feature maps from different layers by summation. This not only ignores the useful relationship information between adjacent layers but also leads to feature-map redundancy and information loss, which hinders further gains in recognition accuracy. In this study, a contextual relational attention-based recognition network (CRABR-Net) is presented. It extracts convolutional feature maps from different CNN layers, highlights important feature content with a simple, parameter-free attention module (SimAM), fuses adjacent feature maps through a complementary relationship feature map calculation, improves feature learning through an enhanced relationship feature map calculation, and finally concatenates the feature maps from different layers for RSSOR. Experimental results show that CRABR-Net exploits the relationships between different CNN layers to improve recognition performance and achieves better results than several state-of-the-art algorithms, with average accuracies of up to 96.46%, 99.20%, and 95.43% on AID, UC-Merced, and RSSCN7 under generic training ratios.
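The pipeline described above relies on SimAM, a published parameter-free attention module that re-weights every activation by an energy term computed from its squared deviation from the per-channel spatial mean. The sketch below shows, in PyTorch, how such per-layer attention and multi-layer concatenation could be wired up; it is a minimal illustration under stated assumptions, not the authors' implementation. The ResNet-50 backbone, the choice of stages, and the multiplicative gating that stands in for the complementary/enhanced relationship calculations are all assumptions, since the abstract does not give those formulas.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


def simam(x, e_lambda=1e-4):
    """SimAM (Yang et al., 2021): parameter-free attention that re-weights each
    activation by an inverse energy term based on its squared deviation from
    the per-channel spatial mean."""
    _, _, h, w = x.shape
    n = h * w - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation from mean
    v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel variance estimate
    e_inv = d / (4 * (v + e_lambda)) + 0.5              # inverse energy per position
    return x * torch.sigmoid(e_inv)


class MultiLayerAttentionSketch(nn.Module):
    """Hypothetical sketch: apply SimAM to two backbone stages, fuse the deeper
    (1x1-projected, upsampled) map into the shallower one with a simple gating
    product, then concatenate pooled descriptors from both stages for
    classification. The gating is a placeholder for the paper's complementary
    and enhanced relationship calculations, which the abstract does not define."""

    def __init__(self, num_classes=30):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)  # assumed backbone choice
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1, backbone.layer2)
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.proj = nn.Conv2d(2048, 1024, kernel_size=1)   # align deep channels to shallow
        self.fc = nn.Linear(1024 + 2048, num_classes)

    def forward(self, x):
        f3 = simam(self.layer3(self.stem(x)))              # shallower stage (1024 ch)
        f4 = simam(self.layer4(f3))                        # deeper stage (2048 ch)
        f4_up = F.interpolate(self.proj(f4), size=f3.shape[2:],
                              mode="bilinear", align_corners=False)
        fused = f3 * torch.sigmoid(f4_up)                  # placeholder relationship fusion
        pooled = [F.adaptive_avg_pool2d(t, 1).flatten(1) for t in (fused, f4)]
        return self.fc(torch.cat(pooled, dim=1))


# Example: a batch of 2 images, 30 scene classes (as in AID)
logits = MultiLayerAttentionSketch(num_classes=30)(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 30])
```

In the actual CRABR-Net, the fusion of adjacent layers and the feature enhancement follow the relationship feature map calculations defined in the paper rather than this placeholder gating.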

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ee9d/10490739/228d17bfdf65/sensors-23-07514-g001.jpg
