


Object Detection Based on Faster R-CNN Algorithm with Skip Pooling and Fusion of Contextual Information.

Affiliation

Department of Mechanical Engineering, College of Field Engineering, Army Engineering University of PLA, Nanjing 210007, China.

Publication

Sensors (Basel). 2020 Sep 25;20(19):5490. doi: 10.3390/s20195490.

DOI: 10.3390/s20195490
PMID: 32992739
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7582940/
Abstract

Deep learning is currently the mainstream approach to object detection, and the faster region-based convolutional neural network (Faster R-CNN) holds a pivotal position within it. Faster R-CNN achieves impressive results in ordinary scenes, but its performance can still be unsatisfactory under special conditions, such as when objects are occluded, deformed, or small.

This paper proposes an improved algorithm built on the Faster R-CNN framework that adds skip pooling and fusion of contextual information, improving detection under these conditions. The improvement has three parts. First, a contextual-information feature extraction model is added after the conv5_3 convolutional layer, so that the network can fully exploit an object's context; this helps especially when the object is occluded or deformed. Second, skip pooling is added, which draws finer-grained information from different feature layers of the deep network and is aimed in particular at scenes with small objects. Third, the region proposal network (RPN) is replaced with the more efficient guided-anchor RPN (GA-RPN), which maintains the recall rate while improving detection performance.

Compared with Faster R-CNN, the you-only-look-once series (e.g., YOLOv3), the single-shot detector (e.g., SSD512), and other object detection algorithms, the proposed algorithm improves the mean average precision (mAP) by 6.857% on average while maintaining a comparable recall rate, demonstrating a higher detection rate and detection efficiency.
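The skip-pooling idea described in the abstract — ROI-pooling features from several convolutional stages, normalizing each pooled vector, and concatenating them into one descriptor — can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions (all feature maps share one spatial resolution, no learned layers, naive max pooling), not the paper's implementation:

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=7):
    """Max-pool one ROI of a (C, H, W) feature map to a fixed (C, S, S) grid."""
    c = feature_map.shape[0]
    x0, y0, x1, y1 = roi
    ys = np.linspace(y0, y1, output_size + 1).astype(int)
    xs = np.linspace(x0, x1, output_size + 1).astype(int)
    out = np.zeros((c, output_size, output_size))
    for i in range(output_size):
        for j in range(output_size):
            # max(..., start + 1) guarantees at least one row/column per bin
            patch = feature_map[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                                   xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[:, i, j] = patch.max(axis=(1, 2))
    return out

def skip_pooled_descriptor(feature_maps, roi, output_size=7):
    """Skip pooling: ROI-pool several conv stages, L2-normalize each pooled
    vector so no single stage dominates, then concatenate into one descriptor."""
    parts = []
    for fm in feature_maps:
        p = roi_pool(fm, roi, output_size).reshape(-1)
        parts.append(p / (np.linalg.norm(p) + 1e-8))
    return np.concatenate(parts)
```

In the actual network the concatenated descriptor would feed the detection head; the per-stage normalization stands in for the scale balancing that fusing features across layers requires.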

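The mAP figure quoted in the abstract follows the standard mean-average-precision protocol: per-class AP is the area under the precision-recall curve, averaged over classes. A minimal sketch of that computation (the `num_gt` argument and the all-points area rule are generic evaluation conventions, not details taken from this paper):

```python
import numpy as np

def average_precision(scores, labels, num_gt):
    """All-points AP for one class: area under the precision-recall curve.
    scores: detection confidences; labels: 1 if the detection matched a
    ground-truth box; num_gt: total ground-truth boxes for the class."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(labels, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / np.arange(1, len(tp) + 1)
    recall = cum_tp / max(num_gt, 1)
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        if r > prev_recall:  # weight precision by the recall step it covers
            ap += p * (r - prev_recall)
            prev_recall = r
    return ap

def mean_average_precision(per_class):
    """per_class: iterable of (scores, labels, num_gt) tuples, one per class."""
    return float(np.mean([average_precision(s, l, n) for s, l, n in per_class]))
```

A real evaluation would first match detections to ground-truth boxes by IoU; here that matching is assumed to be done already and encoded in `labels`.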

Figures 1-8 (PMC7582940):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/8c1df831b69b/sensors-20-05490-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/abb4490c5a3d/sensors-20-05490-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/db5fcae54504/sensors-20-05490-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/c22cf47737f5/sensors-20-05490-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/18f33582136a/sensors-20-05490-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/0dad5a078e85/sensors-20-05490-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/29b3cfdbcd1d/sensors-20-05490-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a079/7582940/a8ca89e5a1bb/sensors-20-05490-g008.jpg

Similar Articles

1. Object Detection Based on Faster R-CNN Algorithm with Skip Pooling and Fusion of Contextual Information. Sensors (Basel). 2020 Sep 25;20(19):5490. doi: 10.3390/s20195490.
2. An improved faster R-CNN algorithm for assisted detection of lung nodules. Comput Biol Med. 2023 Feb;153:106470. doi: 10.1016/j.compbiomed.2022.106470. Epub 2022 Dec 28.
3. Anchor Generation Optimization and Region of Interest Assignment for Vehicle Detection. Sensors (Basel). 2019 Mar 3;19(5):1089. doi: 10.3390/s19051089.
4. Research on Object Detection of PCB Assembly Scene Based on Effective Receptive Field Anchor Allocation. Comput Intell Neurosci. 2022 Feb 14;2022:7536711. doi: 10.1155/2022/7536711. eCollection 2022.
5. Edge Preserving and Multi-Scale Contextual Neural Network for Salient Object Detection. IEEE Trans Image Process. 2018;27(1):121-134. doi: 10.1109/TIP.2017.2756825.
6. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
7. Infrared image target detection for substation electrical equipment based on improved faster region-based convolutional neural network algorithm. Rev Sci Instrum. 2024 Apr 1;95(4). doi: 10.1063/5.0200826.
8. Region Based CNN for Foreign Object Debris Detection on Airfield Pavement. Sensors (Basel). 2018 Mar 1;18(3):737. doi: 10.3390/s18030737.
9. An Enhanced Region Proposal Network for object detection using deep learning method. PLoS One. 2018 Sep 20;13(9):e0203897. doi: 10.1371/journal.pone.0203897. eCollection 2018.
10. Improved Faster R-CNN Traffic Sign Detection Based on a Second Region of Interest and Highly Possible Regions Proposal Network. Sensors (Basel). 2019 May 17;19(10):2288. doi: 10.3390/s19102288.

Cited By

1. A recognition model for winter peach fruits based on improved ResNet and multi-scale feature fusion. Front Plant Sci. 2025 Apr 9;16:1545216. doi: 10.3389/fpls.2025.1545216. eCollection 2025.
2. A Comprehensive Survey of Machine Learning Techniques and Models for Object Detection. Sensors (Basel). 2025 Jan 2;25(1):214. doi: 10.3390/s25010214.
3. Vehicle Target Detection of Autonomous Driving Vehicles in Foggy Environments Based on an Improved YOLOX Network. Sensors (Basel). 2025 Jan 1;25(1):194. doi: 10.3390/s25010194.
4. MwdpNet: towards improving the recognition accuracy of tiny targets in high-resolution remote sensing image. Sci Rep. 2023 Aug 24;13(1):13890. doi: 10.1038/s41598-023-41021-8.
5. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci. 2023 Mar 9;136(3):2127-2172. doi: 10.32604/cmes.2023.025484.
6. Analytical Model of Action Fusion in Sports Tennis Teaching by Convolutional Neural Networks. Comput Intell Neurosci. 2022 Jul 31;2022:7835241. doi: 10.1155/2022/7835241. eCollection 2022.
7. Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Curr Oncol. 2021 Sep 16;28(5):3585-3601. doi: 10.3390/curroncol28050307.

References

1. High-Quality Proposals for Weakly Supervised Object Detection. IEEE Trans Image Process. 2020 Apr 16. doi: 10.1109/TIP.2020.2987161.
2. Learning Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection. IEEE Trans Image Process. 2019 Jan;28(1):265-278. doi: 10.1109/TIP.2018.2867198.
3. Weakly Supervised Object Detection via Object-Specific Pixel Gradient. IEEE Trans Neural Netw Learn Syst. 2018 Dec;29(12):5960-5970. doi: 10.1109/TNNLS.2018.2816021. Epub 2018 Apr 9.
4. Deeply Supervised Salient Object Detection with Short Connections. IEEE Trans Pattern Anal Mach Intell. 2019 Apr;41(4):815-828. doi: 10.1109/TPAMI.2018.2815688. Epub 2018 Mar 14.
5. Exploring Weakly Labeled Images for Video Object Segmentation With Submodular Proposal Selection. IEEE Trans Image Process. 2018 Sep;27(9):4245-4259. doi: 10.1109/TIP.2018.2806995.
6. Robust Small Target Co-Detection from Airborne Infrared Image Sequences. Sensors (Basel). 2017 Sep 29;17(10):2242. doi: 10.3390/s17102242.
7. Edge Preserving and Multi-Scale Contextual Neural Network for Salient Object Detection. IEEE Trans Image Process. 2018;27(1):121-134. doi: 10.1109/TIP.2017.2756825.
8. Bayesian saliency via low and mid level cues. IEEE Trans Image Process. 2013 May;22(5):1689-98. doi: 10.1109/TIP.2012.2216276. Epub 2012 Aug 30.
9. Pedestrian detection: an evaluation of the state of the art. IEEE Trans Pattern Anal Mach Intell. 2012 Apr;34(4):743-61. doi: 10.1109/TPAMI.2011.155.
10. Advanced Hough transform using a multilayer fractional Fourier method. IEEE Trans Image Process. 2010 Jun;19(6):1558-66. doi: 10.1109/TIP.2010.2042102. Epub 2010 Feb 8.