Research on dense object detection methods in congested environments of urban streets and roads based on DCYOLO.

Authors

Jiang Shuhai, Luo Bowen, Jiang Haoyue, Zhou Zhongkai, Sun Shangjie

Affiliations

School of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing, 210037, Jiangsu, China.

Institute of Intelligent Control and Robotics (IICR), Nanjing Forestry University, Nanjing, 210037, Jiangsu, China.

Publication Information

Sci Rep. 2024 Jan 11;14(1):1127. doi: 10.1038/s41598-024-51868-0.

DOI: 10.1038/s41598-024-51868-0
PMID: 38212436
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10784535/
Abstract

Urban streets are congested environments containing large numbers of occluded objects of widely differing sizes. To address the missed detections and low detection accuracy that result from this situation, an improved algorithm based on YOLOv4, DCYOLO, is proposed. First, a difference-sensitive network (DSN) is introduced to extract object edge features from the original image; the edge features are then assigned back to the original image to strengthen object edges and thereby improve detection performance. Second, a feature fusion module based on context information (CFFB) is introduced to fuse shallow fine-grained features with deep features across scales, strengthening the cross-scale semantic fusion of the feature maps and further improving detection performance. Finally, in the prediction part of the network, the SIOU loss function replaces the original CIOU loss function to improve convergence speed and detection accuracy. Experiments on MS COCO 2017 and a self-made dataset show that, compared with YOLOv4, DCYOLO greatly improves detection accuracy, with an increase of 9.1 percentage points in AP and 10.4 percentage points in AP. Compared with YOLOv5x and Faster R-CNN, DCYOLO achieves higher accuracy and better detection performance. The results demonstrate that DCYOLO can meet the requirements of dense object detection in the congested environments of urban streets.
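
To make the cross-scale fusion idea concrete, the sketch below shows one minimal way a context-based fusion block could be wired in PyTorch: a shallow, fine-grained feature map is combined with an upsampled deeper, semantically richer one. This is an illustrative sketch only, not the authors' CFFB; the class name ContextFusionBlock, the channel widths, and the layer choices are assumptions invented for illustration.

# Minimal sketch of cross-scale feature fusion (illustrative only, not the paper's CFFB).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextFusionBlock(nn.Module):
    """Fuse a shallow, fine-grained feature map with a deeper, semantically
    richer one at the shallow map's spatial resolution."""
    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        # 1x1 convolutions project both inputs to a common channel width
        self.proj_shallow = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.proj_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        # 3x3 convolution mixes the concatenated features after upsampling
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Bring the deep map up to the shallow map's spatial size
        deep_up = F.interpolate(self.proj_deep(deep), size=shallow.shape[-2:],
                                mode="nearest")
        # Concatenate fine-grained detail with deep semantics, then mix
        return self.fuse(torch.cat([self.proj_shallow(shallow), deep_up], dim=1))

if __name__ == "__main__":
    block = ContextFusionBlock(shallow_ch=256, deep_ch=1024, out_ch=256)
    p3 = torch.randn(1, 256, 80, 80)    # shallow, fine-grained level
    p5 = torch.randn(1, 1024, 20, 20)   # deep, coarse level
    print(block(p3, p5).shape)          # torch.Size([1, 256, 80, 80])

The design point illustrated here is that the deep map contributes semantic context while the shallow map preserves the spatial detail needed for small or partially occluded objects; the paper's actual module may differ in structure and in how context information is aggregated.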


[Figures 1–15: full-resolution images are available in the PMC version of the article at https://pmc.ncbi.nlm.nih.gov/articles/PMC10784535/]

Similar Articles

1. Research on dense object detection methods in congested environments of urban streets and roads based on DCYOLO. Sci Rep. 2024 Jan 11;14(1):1127. doi: 10.1038/s41598-024-51868-0.
2. A novel algorithm for small object detection based on YOLOv4. PeerJ Comput Sci. 2023 Mar 22;9:e1314. doi: 10.7717/peerj-cs.1314. eCollection 2023.
3. ssFPN: Scale Sequence (S²) Feature-Based Feature Pyramid Network for Object Detection. Sensors (Basel). 2023 Apr 30;23(9):4432. doi: 10.3390/s23094432.
4. Precision Detection of Dense Plums in Orchards Using the Improved YOLOv4 Model. Front Plant Sci. 2022 Mar 11;13:839269. doi: 10.3389/fpls.2022.839269. eCollection 2022.
5. Object Detection Based on Faster R-CNN Algorithm with Skip Pooling and Fusion of Contextual Information. Sensors (Basel). 2020 Sep 25;20(19):5490. doi: 10.3390/s20195490.
6. Multi-Object Detection Method in Construction Machinery Swarm Operations Based on the Improved YOLOv4 Model. Sensors (Basel). 2022 Sep 26;22(19):7294. doi: 10.3390/s22197294.
7. Lightweight aerial image object detection algorithm based on improved YOLOv5s. Sci Rep. 2023 May 15;13(1):7817. doi: 10.1038/s41598-023-34892-4.
8. Traffic Lights Detection and Recognition Method Based on the Improved YOLOv4 Algorithm. Sensors (Basel). 2021 Dec 28;22(1):200. doi: 10.3390/s22010200.
9. An object detection algorithm combining self-attention and YOLOv4 in traffic scene. PLoS One. 2023 May 18;18(5):e0285654. doi: 10.1371/journal.pone.0285654. eCollection 2023.
10. FocusDet: an efficient object detector for small object. Sci Rep. 2024 May 10;14(1):10697. doi: 10.1038/s41598-024-61136-w.

Cited By

1. Dense object detection methods in RAW UAV imagery based on YOLOv8. Sci Rep. 2024 Aug 4;14(1):18019. doi: 10.1038/s41598-024-69106-y.

References

1. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.