Adaptive condition-aware high-dimensional decoupling remote sensing image object detection algorithm.

Authors

Bai Chenshuai, Bai Xiaofeng, Wu Kaijun, Ye Yuanjie

Affiliations

School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou, 730070, China.

Department of Mechanical and Electrical Engineering, Guangzhou City Polytechnic, Guangzhou, 510000, China.

Publication

Sci Rep. 2024 Aug 29;14(1):20090. doi: 10.1038/s41598-024-71001-5.

DOI:10.1038/s41598-024-71001-5
PMID:39209928
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11362310/
Abstract

Remote Sensing Image Object Detection (RSIOD) faces the challenges of multi-scale objects, dense overlap of objects and uneven data distribution in practical applications. In order to solve these problems, this paper proposes a YOLO-ACPHD RSIOD algorithm. The algorithm adopts Adaptive Condition Awareness Technology (ACAT), which can dynamically adjust the parameters of the convolution kernel, so as to adapt to the objects of different scales and positions. Compared with the traditional fixed convolution kernel, this dynamic adjustment can better adapt to the diversity of scale, direction and shape of the object, thus improving the accuracy and robustness of Object Detection (OD). In addition, a High-Dimensional Decoupling Technology (HDDT) is used to reduce the amount of calculation to 1/N by performing depthwise convolution on the input data and then performing spatial convolution on each channel. When dealing with large-scale Remote Sensing Image (RSI) data, this reduction in computation can significantly improve the efficiency of the algorithm and accelerate the speed of OD, so as to better adapt to the needs of practical application scenarios. Through experimental verification on the RSOD RSI data set, the YOLO-ACPHD model in this paper shows very satisfactory performance. The F1 value reaches 0.99, the Precision value reaches 1, the Precision-Recall value reaches 0.994, the Recall value reaches 1, and the mAP value reaches 99.36%, which indicates that the model shows the highest level in the accuracy and comprehensiveness of OD.
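The HDDT step described in the abstract — a per-channel (depthwise) convolution followed by a cross-channel combination — is, in effect, a depthwise-separable factorization, and the claimed roughly 1/N cost reduction follows from simple counting. A minimal sketch of that arithmetic (the tensor sizes are illustrative, not taken from the paper):

```python
# Multiply-accumulate (MAC) counts for a standard convolution versus a
# depthwise-separable one, on an H x W feature map with c_in input
# channels, c_out output channels, and a K x K kernel.

def standard_conv_macs(h, w, c_in, c_out, k):
    """Standard K x K convolution: every output channel sees every input channel."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """Depthwise K x K per input channel, then a 1 x 1 pointwise convolution."""
    depthwise = h * w * c_in * k * k        # spatial filtering, one channel at a time
    pointwise = h * w * c_in * c_out        # 1 x 1 mixing across channels
    return depthwise + pointwise

# Illustrative sizes (hypothetical, not from the paper):
h, w, c_in, c_out, k = 64, 64, 128, 128, 3
ratio = depthwise_separable_macs(h, w, c_in, c_out, k) / standard_conv_macs(h, w, c_in, c_out, k)
# Algebraically, ratio = 1/c_out + 1/k**2, i.e. on the order of the
# 1/N saving the abstract cites (here about 0.119).
print(f"cost ratio: {ratio:.3f}")
```

The ratio simplifies to 1/c_out + 1/k², so for large channel counts the k x k term dominates and the cost falls by roughly a factor of k² relative to a standard convolution.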

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/9fc4fee8fb07/41598_2024_71001_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/4b6927fe1036/41598_2024_71001_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/302b24ca526f/41598_2024_71001_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/d83c9541dff2/41598_2024_71001_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/dd3d47df0ac7/41598_2024_71001_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/aa65ee7460a6/41598_2024_71001_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/ab54e44ec6dd/41598_2024_71001_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/2f702b9b328a/41598_2024_71001_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/58253b40079c/41598_2024_71001_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/ca04d20cef7d/41598_2024_71001_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/e3dbb362630b/41598_2024_71001_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/4c93bd0dae98/41598_2024_71001_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d22/11362310/92d74dd3ac0b/41598_2024_71001_Fig13_HTML.jpg

Similar Articles

1. Adaptive condition-aware high-dimensional decoupling remote sensing image object detection algorithm. Sci Rep. 2024 Aug 29;14(1):20090. doi: 10.1038/s41598-024-71001-5.
2. A Multi-Scale-Enhanced YOLO-V5 Model for Detecting Small Objects in Remote Sensing Image Information. Sensors (Basel). 2024 Jul 4;24(13):4347. doi: 10.3390/s24134347.
3. OD-YOLO: Robust Small Object Detection Model in Remote Sensing Image with a Novel Multi-Scale Feature Fusion. Sensors (Basel). 2024 Jun 3;24(11):3596. doi: 10.3390/s24113596.
4. RSI-YOLO: Object Detection Method for Remote Sensing Images Based on Improved YOLO. Sensors (Basel). 2023 Jul 14;23(14):6414. doi: 10.3390/s23146414.
5. MSA-YOLO: A Remote Sensing Object Detection Model Based on Multi-Scale Strip Attention. Sensors (Basel). 2023 Jul 30;23(15):6811. doi: 10.3390/s23156811.
6. YOLO-Faster: An efficient remote sensing object detection method based on AMFFN. Sci Prog. 2024 Oct-Dec;107(4):368504241280765. doi: 10.1177/00368504241280765.
7. Fast and Accurate Object Detection in Remote Sensing Images Based on Lightweight Deep Neural Network. Sensors (Basel). 2021 Aug 13;21(16):5460. doi: 10.3390/s21165460.
8. Improved YOLO-V3 with DenseNet for Multi-Scale Remote Sensing Target Detection. Sensors (Basel). 2020 Jul 31;20(15):4276. doi: 10.3390/s20154276.
9. SEB-YOLO: An Improved YOLOv5 Model for Remote Sensing Small Target Detection. Sensors (Basel). 2024 Mar 29;24(7):2193. doi: 10.3390/s24072193.
10. Application of local fully Convolutional Neural Network combined with YOLO v5 algorithm in small target detection of remote sensing image. PLoS One. 2021 Oct 29;16(10):e0259283. doi: 10.1371/journal.pone.0259283. eCollection 2021.

Cited By

1. Enhancing cross view geo localization through global local quadrant interaction network. Sci Rep. 2025 Sep 29;15(1):33431. doi: 10.1038/s41598-025-18935-6.

References

1. YOLOX target detection model can identify and classify several types of tea buds with similar characteristics. Sci Rep. 2024 Feb 3;14(1):2855. doi: 10.1038/s41598-024-53498-y.
2. SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation With Fine-Grained Geometry. IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8902-8919. doi: 10.1109/TPAMI.2023.3237577.
3. Health Status Recognition Method for Rotating Machinery Based on Multi-Scale Hybrid Features and Improved Convolutional Neural Networks. Sensors (Basel). 2023 Jun 18;23(12):5688. doi: 10.3390/s23125688.
4. MSIA-Net: A Lightweight Infrared Target Detection Network with Efficient Information Fusion. Entropy (Basel). 2023 May 17;25(5):808. doi: 10.3390/e25050808.