

Adjacent Context Coordination Network for Salient Object Detection in Optical Remote Sensing Images

Authors

Li Gongyang, Liu Zhi, Zeng Dan, Lin Weisi, Ling Haibin

Publication

IEEE Trans Cybern. 2023 Jan;53(1):526-538. doi: 10.1109/TCYB.2022.3162945. Epub 2022 Dec 23.

DOI: 10.1109/TCYB.2022.3162945
PMID: 35417367
Abstract

Salient object detection (SOD) in optical remote sensing images (RSIs), or RSI-SOD, is an emerging topic in understanding optical RSIs. However, due to the difference between optical RSIs and natural scene images (NSIs), directly applying NSI-SOD methods to optical RSIs fails to achieve satisfactory results. In this article, we propose a novel adjacent context coordination network (ACCoNet) to explore the coordination of adjacent features in an encoder-decoder architecture for RSI-SOD. Specifically, ACCoNet consists of three parts: 1) an encoder; 2) adjacent context coordination modules (ACCoMs); and 3) a decoder. As the key component of ACCoNet, ACCoM activates the salient regions of output features of the encoder and transmits them to the decoder. ACCoM contains a local branch and two adjacent branches to coordinate the multilevel features simultaneously. The local branch highlights the salient regions in an adaptive way, while the adjacent branches introduce global information of adjacent levels to enhance salient regions. In addition, to extend the capabilities of the classic decoder block (i.e., several cascaded convolutional layers), we extend it with two bifurcations and propose a bifurcation-aggregation block (BAB) to capture the contextual information in the decoder. Extensive experiments on two benchmark datasets demonstrate that the proposed ACCoNet outperforms 22 state-of-the-art methods under nine evaluation metrics, and runs up to 81 fps on a single NVIDIA Titan X GPU. The code and results of our method are available at https://github.com/MathLee/ACCoNet.
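The abstract describes ACCoM's core idea: a local branch adaptively highlights salient regions of the current encoder level, while two adjacent branches bring in global context from the neighboring levels. The toy NumPy sketch below illustrates only that fusion pattern; it is not the authors' implementation (the real ACCoM uses learned convolutional branches, and `resize_nn`, `accom_sketch`, and the channel-mean attention are illustrative stand-ins).

```python
import numpy as np

def resize_nn(feat, h, w):
    """Nearest-neighbor resize of a (C, H, W) feature map to (C, h, w)."""
    c, H, W = feat.shape
    rows = np.arange(h) * H // h
    cols = np.arange(w) * W // w
    return feat[:, rows][:, :, cols]

def accom_sketch(f_prev, f_cur, f_next):
    """Toy adjacent-context coordination.

    Local branch: gate f_cur with a sigmoid attention map (stand-in for
    the paper's learned adaptive highlighting). Adjacent branches: resize
    the previous (higher-res) and next (lower-res) encoder levels to the
    current resolution and add their global average as per-channel context.
    """
    c, h, w = f_cur.shape
    # local branch: channel-mean sigmoid attention over spatial positions
    attn = 1.0 / (1.0 + np.exp(-f_cur.mean(axis=0, keepdims=True)))
    local = f_cur * attn
    # adjacent branches: per-channel global context from neighboring levels
    ctx_prev = resize_nn(f_prev, h, w).mean(axis=(1, 2), keepdims=True)
    ctx_next = resize_nn(f_next, h, w).mean(axis=(1, 2), keepdims=True)
    return local + ctx_prev + ctx_next  # fused (C, h, w) feature

# three adjacent encoder levels at decreasing spatial resolution
rng = np.random.default_rng(0)
f1 = rng.normal(size=(8, 32, 32))
f2 = rng.normal(size=(8, 16, 16))
f3 = rng.normal(size=(8, 8, 8))
fused = accom_sketch(f1, f2, f3)
print(fused.shape)  # (8, 16, 16)
```

The fused map keeps the current level's resolution, which is what lets the decoder consume it level by level.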


Similar Articles

1
Adjacent Context Coordination Network for Salient Object Detection in Optical Remote Sensing Images.
IEEE Trans Cybern. 2023 Jan;53(1):526-538. doi: 10.1109/TCYB.2022.3162945. Epub 2022 Dec 23.
2
Dense Attention Fluid Network for Salient Object Detection in Optical Remote Sensing Images.
IEEE Trans Image Process. 2021;30:1305-1317. doi: 10.1109/TIP.2020.3042084. Epub 2020 Dec 23.
3
Edge-Guided Recurrent Positioning Network for Salient Object Detection in Optical Remote Sensing Images.
IEEE Trans Cybern. 2023 Jan;53(1):539-552. doi: 10.1109/TCYB.2022.3163152. Epub 2022 Dec 23.
4
Salient Object Detection in Optical Remote Sensing Images Driven by Transformer.
IEEE Trans Image Process. 2023;32:5257-5269. doi: 10.1109/TIP.2023.3314285. Epub 2023 Sep 22.
5
3-D Convolutional Neural Networks for RGB-D Salient Object Detection and Beyond.
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):4309-4323. doi: 10.1109/TNNLS.2022.3202241. Epub 2024 Feb 29.
6
Hierarchical Alternate Interaction Network for RGB-D Salient Object Detection.
IEEE Trans Image Process. 2021;30:3528-3542. doi: 10.1109/TIP.2021.3062689. Epub 2021 Mar 11.
7
Adaptive adjacent context negotiation network for object detection in remote sensing imagery.
PeerJ Comput Sci. 2024 Jul 29;10:e2199. doi: 10.7717/peerj-cs.2199. eCollection 2024.
8
Decomposition and Completion Network for Salient Object Detection.
IEEE Trans Image Process. 2021;30:6226-6239. doi: 10.1109/TIP.2021.3093380. Epub 2021 Jul 12.
9
Alignment Integration Network for Salient Object Detection and Its Application for Optical Remote Sensing Images.
Sensors (Basel). 2023 Jul 20;23(14):6562. doi: 10.3390/s23146562.
10
Motion-Aware Memory Network for Fast Video Salient Object Detection.
IEEE Trans Image Process. 2024;33:709-721. doi: 10.1109/TIP.2023.3348659. Epub 2024 Jan 12.

Cited By

1
Enhancing UAV Object Detection in Low-Light Conditions with ELS-YOLO: A Lightweight Model Based on Improved YOLOv11.
Sensors (Basel). 2025 Jul 17;25(14):4463. doi: 10.3390/s25144463.
2
MCFNet: Multi-Scale Contextual Fusion Network for Salient Object Detection in Optical Remote Sensing Images.
Sensors (Basel). 2025 May 12;25(10):3035. doi: 10.3390/s25103035.
3
A multi-scene deep learning model for automated segmentation of acute vertebral compression fractures from radiographs: a multicenter cohort study.
Insights Imaging. 2024 Dec 2;15(1):290. doi: 10.1186/s13244-024-01861-y.
4
Global Semantic-Sense Aggregation Network for Salient Object Detection in Remote Sensing Images.
Entropy (Basel). 2024 May 25;26(6):445. doi: 10.3390/e26060445.
5
Alignment Integration Network for Salient Object Detection and Its Application for Optical Remote Sensing Images.
Sensors (Basel). 2023 Jul 20;23(14):6562. doi: 10.3390/s23146562.