


Object Detection and Classification Framework for Analysis of Video Data Acquired from Indian Roads.

Author Information

Padia Aayushi, T N Aryan, Thummagunti Sharan, Sharma Vivaan, K Vanahalli Manjunath, B M Prabhu Prasad, G N Girish, Kim Yong-Guk, B N Pavan Kumar

Affiliations

Department of DSAI, Indian Institute of Information Technology, Dharwad 580009, India.

Department of CSE, Indian Institute of Information Technology, Dharwad 580009, India.

Publication Information

Sensors (Basel). 2024 Sep 29;24(19):6319. doi: 10.3390/s24196319.

DOI:10.3390/s24196319
PMID:39409360
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11479008/
Abstract

Object detection and classification in autonomous vehicles are crucial for ensuring safe and efficient navigation through complex environments. This paper addresses the need for robust detection and classification algorithms tailored specifically for Indian roads, which present unique challenges such as diverse traffic patterns, erratic driving behaviors, and varied weather conditions. Despite significant progress in object detection and classification for autonomous vehicles, existing methods often struggle to generalize effectively to the conditions encountered on Indian roads. This paper proposes a novel approach utilizing the YOLOv8 deep learning model, designed to be lightweight, scalable, and efficient for real-time implementation using onboard cameras. Experimental evaluations were conducted using real-life scenarios encompassing diverse weather and traffic conditions. Videos captured in various environments were utilized to assess the model's performance, with particular emphasis on its accuracy and precision across 35 distinct object classes. The experiments demonstrate a precision of 0.65 for the detection of multiple classes, indicating the model's efficacy in handling a wide range of objects. Moreover, real-time testing revealed an average accuracy exceeding 70% across all scenarios, with a peak accuracy of 95% achieved in optimal conditions. The parameters considered in the evaluation process encompassed not only traditional metrics but also factors pertinent to Indian road conditions, such as low lighting, occlusions, and unpredictable traffic patterns. The proposed method exhibits superiority over existing approaches by offering a balanced trade-off between model complexity and performance. By leveraging the YOLOv8 architecture, this solution achieved high accuracy while minimizing computational resources, making it well suited for deployment in autonomous vehicles operating on Indian roads.
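The reported per-class precision of 0.65 is the standard detection metric TP / (TP + FP), here aggregated over the 35 object classes. A minimal sketch of how such a macro-averaged figure is computed from per-class detection counts (the class names and counts below are hypothetical illustrations, not values from the paper):

```python
def precision(tp: int, fp: int) -> float:
    """Precision = true positives / all positive predictions."""
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical per-class (TP, FP) counts for a few of the 35 classes.
counts = {
    "car": (130, 70),
    "auto_rickshaw": (65, 35),
    "pedestrian": (78, 42),
}

per_class = {cls: precision(tp, fp) for cls, (tp, fp) in counts.items()}
macro_precision = sum(per_class.values()) / len(per_class)
print(round(macro_precision, 2))  # macro average over the listed classes
```

In practice a detection counts as a true positive only if its IoU with a ground-truth box exceeds a threshold (commonly 0.5), so the TP/FP tallies themselves come from a matching step that this sketch omits.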


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/c9e205e8eb66/sensors-24-06319-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/8046aafe46ad/sensors-24-06319-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/1ad34cc7d21e/sensors-24-06319-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/fbcf337da362/sensors-24-06319-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/eafc2f9837dd/sensors-24-06319-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/63146cf45785/sensors-24-06319-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/f1ccc514dc84/sensors-24-06319-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/6aadffb00d76/sensors-24-06319-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/1cec26eda2d8/sensors-24-06319-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/2008784b2bf6/sensors-24-06319-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/fcc08a700f6f/sensors-24-06319-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/cdf2e18604fc/sensors-24-06319-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/2261f3beefe7/sensors-24-06319-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/40b042c45762/sensors-24-06319-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/1c4e3515e96b/sensors-24-06319-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/95e8/11479008/0a0cb5d02976/sensors-24-06319-g016.jpg

Similar Articles

1
Optimized Design of EdgeBoard Intelligent Vehicle Based on PP-YOLOE.
Sensors (Basel). 2024 May 16;24(10):3180. doi: 10.3390/s24103180.
2
Innovative road distress detection (IR-DD): an efficient and scalable deep learning approach.
PeerJ Comput Sci. 2024 May 20;10:e2038. doi: 10.7717/peerj-cs.2038. eCollection 2024.
3
SOD-YOLOv8-Enhancing YOLOv8 for Small Object Detection in Aerial Imagery and Traffic Scenes.
Sensors (Basel). 2024 Sep 25;24(19):6209. doi: 10.3390/s24196209.
4
Synchronous End-to-End Vehicle Pedestrian Detection Algorithm Based on Improved YOLOv8 in Complex Scenarios.
Sensors (Basel). 2024 Sep 22;24(18):6116. doi: 10.3390/s24186116.
5
Autonomous Crack Detection for Mountainous Roads Using UAV Inspection System.
Sensors (Basel). 2024 Jul 22;24(14):4751. doi: 10.3390/s24144751.
6
Lightweight Object Detection Ensemble Framework for Autonomous Vehicles in Challenging Weather Conditions.
Comput Intell Neurosci. 2021 Oct 7;2021:5278820. doi: 10.1155/2021/5278820. eCollection 2021.
7
Explainable AI in Scene Understanding for Autonomous Vehicles in Unstructured Traffic Environments on Indian Roads Using the Inception U-Net Model with Grad-CAM Visualization.
Sensors (Basel). 2022 Dec 10;22(24):9677. doi: 10.3390/s22249677.
8
Effective lane detection on complex roads with convolutional attention mechanism in autonomous vehicles.
Sci Rep. 2024 Aug 19;14(1):19193. doi: 10.1038/s41598-024-70116-z.
9
Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking.
Sensors (Basel). 2023 Mar 22;23(6):3335. doi: 10.3390/s23063335.

References Cited in This Article

1
Object Detection in Adverse Weather for Autonomous Driving through Data Merging and YOLOv8.
Sensors (Basel). 2023 Oct 14;23(20):8471. doi: 10.3390/s23208471.
2
BL-YOLOv8: An Improved Road Defect Detection Model Based on YOLOv8.
Sensors (Basel). 2023 Oct 10;23(20):8361. doi: 10.3390/s23208361.
3
DATS_2022: A versatile indian dataset for object detection in unstructured traffic conditions.
Data Brief. 2022 Jul 14;43:108470. doi: 10.1016/j.dib.2022.108470. eCollection 2022 Aug.
4
Detection and Tracking Meet Drones Challenge.
IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):7380-7399. doi: 10.1109/TPAMI.2021.3119563. Epub 2022 Oct 4.
5
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.