


MAGNet: A Camouflaged Object Detection Network Simulating the Observation Effect of a Magnifier.

Authors

Jiang Xinhao, Cai Wei, Zhang Zhili, Jiang Bo, Yang Zhiyong, Wang Xin

Affiliation

Xi'an Research Institute of High Technology, Xi'an 710064, China.

Publication

Entropy (Basel). 2022 Dec 9;24(12):1804. doi: 10.3390/e24121804.

DOI: 10.3390/e24121804
PMID: 36554209
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9778132/
Abstract

In recent years, protecting important objects by simulating animal camouflage has been widely employed in many fields. Therefore, camouflaged object detection (COD) technology has emerged. COD is more difficult to achieve than traditional object detection techniques due to the high degree of fusion of objects camouflaged with the background. In this paper, we strive to more accurately and efficiently identify camouflaged objects. Inspired by the use of magnifiers to search for hidden objects in pictures, we propose a COD network that simulates the observation effect of a magnifier called the MAGnifier Network (MAGNet). Specifically, our MAGNet contains two parallel modules: the ergodic magnification module (EMM) and the attention focus module (AFM). The EMM is designed to mimic the process of a magnifier enlarging an image, and AFM is used to simulate the observation process in which human attention is highly focused on a particular region. The two sets of output camouflaged object maps were merged to simulate the observation of an object by a magnifier. In addition, a weighted key point area perception loss function, which is more applicable to COD, was designed based on two modules to give greater attention to the camouflaged object. Extensive experiments demonstrate that compared with 19 cutting-edge detection models, MAGNet can achieve the best comprehensive effect on eight evaluation metrics in the public COD dataset. Additionally, compared to other COD methods, MAGNet has lower computational complexity and faster segmentation. We also validated the model's generalization ability on a military camouflaged object dataset constructed in-house. Finally, we experimentally explored some extended applications of COD.
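The abstract names two concrete mechanisms: the outputs of the two parallel modules (EMM and AFM) are merged into one camouflaged-object map, and a loss function gives extra weight to key (camouflaged-object) regions. The following is a minimal NumPy sketch of those two ideas only; the merge operator, function names, and weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def merge_maps(emm_map, afm_map):
    # The abstract states the two module outputs are merged; element-wise
    # averaging is one simple choice (the exact operator is an assumption here).
    return (emm_map + afm_map) / 2.0

def weighted_keypoint_loss(pred, target, key_mask, key_weight=2.0):
    # Sketch of a "weighted key point area perception" loss: plain binary
    # cross-entropy, upweighted inside the key (camouflaged-object) region.
    # Names and the weighting scheme are hypothetical illustrations.
    eps = 1e-7
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    weights = np.where(key_mask > 0, key_weight, 1.0)
    return float((weights * bce).mean())

# Toy example with two hypothetical 4x4 prediction maps.
emm = np.full((4, 4), 0.2)    # stand-in for an EMM output map
afm = np.full((4, 4), 0.6)    # stand-in for an AFM output map
fused = merge_maps(emm, afm)  # every entry is 0.4
```

Upweighting errors inside the key mask pushes training attention toward the camouflaged region, which is the stated motivation for the paper's loss design.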


[Figures 1-16 (entropy-24-01804-g001 through g016) are available via the full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC9778132/]

Similar articles

1. MAGNet: A Camouflaged Object Detection Network Simulating the Observation Effect of a Magnifier.
Entropy (Basel). 2022 Dec 9;24(12):1804. doi: 10.3390/e24121804.
2. Edge-Guided Camouflaged Object Detection via Multi-Level Feature Integration.
Sensors (Basel). 2023 Jun 21;23(13):5789. doi: 10.3390/s23135789.
3. Features Split and Aggregation Network for Camouflaged Object Detection.
J Imaging. 2024 Jan 18;10(1):0. doi: 10.3390/jimaging10010024.
4. Collaborative Camouflaged Object Detection: A Large-Scale Dataset and Benchmark.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):18470-18484. doi: 10.1109/TNNLS.2023.3317091. Epub 2024 Dec 2.
5. Zero-Shot Camouflaged Object Detection.
IEEE Trans Image Process. 2023;32:5126-5137. doi: 10.1109/TIP.2023.3308295. Epub 2023 Sep 12.
6. Guided multi-scale refinement network for camouflaged object detection.
Multimed Tools Appl. 2023;82(4):5785-5801. doi: 10.1007/s11042-022-13274-4. Epub 2022 Jul 30.
7. Camouflaged Object Segmentation Based on Matching-Recognition-Refinement Network.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):15993-16007. doi: 10.1109/TNNLS.2023.3291595. Epub 2024 Oct 29.
8. Discriminative context-aware network for camouflaged object detection.
Front Artif Intell. 2024 Mar 27;7:1347898. doi: 10.3389/frai.2024.1347898. eCollection 2024.
9. Nowhere to Disguise: Spot Camouflaged Objects via Saliency Attribute Transfer.
IEEE Trans Image Process. 2023;32:3108-3120. doi: 10.1109/TIP.2023.3277793. Epub 2023 Jun 2.
10. Feature Aggregation and Propagation Network for Camouflaged Object Detection.
IEEE Trans Image Process. 2022;31:7036-7047. doi: 10.1109/TIP.2022.3217695. Epub 2022 Nov 14.

Cited by

1. An efficient camouflaged image segmentation with modified UNet and attention techniques.
Sci Rep. 2025 Jul 1;15(1):21086. doi: 10.1038/s41598-025-07571-9.
2. Features Split and Aggregation Network for Camouflaged Object Detection.
J Imaging. 2024 Jan 18;10(1):0. doi: 10.3390/jimaging10010024.

References

1. CaraNet: context axial reverse attention network for segmentation of small medical objects.
J Med Imaging (Bellingham). 2023 Jan;10(1):014005. doi: 10.1117/1.JMI.10.1.014005. Epub 2023 Feb 18.
2. Enhanced U-Net: A Feature Enhancement Network for Polyp Segmentation.
Proc Int Robot Vis Conf. 2021 May;2021:181-188. doi: 10.1109/crv52889.2021.00032. Epub 2021 Jul 5.
3. Concealed Object Detection.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6024-6042. doi: 10.1109/TPAMI.2021.3085766. Epub 2022 Sep 14.
4. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation.
IEEE Trans Med Imaging. 2020 Jun;39(6):1856-1867. doi: 10.1109/TMI.2019.2959609. Epub 2019 Dec 13.
5. Res2Net: A New Multi-Scale Backbone Architecture.
IEEE Trans Pattern Anal Mach Intell. 2021 Feb;43(2):652-662. doi: 10.1109/TPAMI.2019.2938758. Epub 2021 Jan 8.
6. Squeeze-and-Excitation Networks.
IEEE Trans Pattern Anal Mach Intell. 2020 Aug;42(8):2011-2023. doi: 10.1109/TPAMI.2019.2913372. Epub 2019 Apr 29.
7. Three-stream Attention-aware Network for RGB-D Salient Object Detection.
IEEE Trans Image Process. 2019 Jan 7. doi: 10.1109/TIP.2019.2891104.
8. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.
J Healthc Eng. 2017;2017:4037190. doi: 10.1155/2017/4037190. Epub 2017 Jul 26.
9. Animal camouflage: current issues and new perspectives.
Philos Trans R Soc Lond B Biol Sci. 2009 Feb 27;364(1516):423-7. doi: 10.1098/rstb.2008.0217.