Vehicle recognition pipeline via DeepSort on aerial image datasets.

Author Information

Hanzla Muhammad, Yusuf Muhammad Ovais, Al Mudawi Naif, Sadiq Touseef, Almujally Nouf Abdullah, Rahman Hameedur, Alazeb Abdulwahab, Algarni Asaad

Affiliations

Faculty of Computing and AI, Air University, Islamabad, Pakistan.

Department of Computer Science, College of Computer Science and Information System, Najran University, Najran, Saudi Arabia.

Publication Information

Front Neurorobot. 2024 Aug 16;18:1430155. doi: 10.3389/fnbot.2024.1430155. eCollection 2024.


DOI: 10.3389/fnbot.2024.1430155
PMID: 39220587
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11362136/
Abstract

INTRODUCTION: Unmanned aerial vehicles (UAVs) are widely used in various computer vision applications, especially in intelligent traffic monitoring, as they are agile and simplify operations while boosting efficiency. However, automating these procedures remains a significant challenge due to the difficulty of extracting foreground (vehicle) information from complex traffic scenes.

METHODS: This paper presents a unique method for autonomous vehicle surveillance that uses fuzzy C-means (FCM) clustering to segment aerial images. YOLOv8, known for its ability to detect tiny objects, is then used to detect vehicles. Additionally, ORB features are employed to support vehicle recognition, assignment, and recovery across image frames. Vehicle tracking is accomplished using DeepSORT, which combines Kalman filtering with deep learning to achieve precise results.

RESULTS: The proposed model demonstrates remarkable performance: for vehicle detection, it achieves precisions of 0.86 and 0.84 on the VEDAI and SRTID datasets, respectively.

DISCUSSION: For vehicle tracking, the model achieves accuracies of 0.89 and 0.85 on the VEDAI and SRTID datasets, respectively.
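The following is a minimal Python sketch of how a pipeline like the one described above could be assembled from publicly available components: scikit-fuzzy for FCM segmentation, Ultralytics YOLOv8 for detection, OpenCV ORB as a hook for feature-based re-identification, and the deep-sort-realtime package for DeepSORT tracking. The weight file, video path, clustering parameters, and the bright-cluster foreground heuristic are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: FCM segmentation + YOLOv8 detection + DeepSORT tracking on aerial video.
# Parameters, weights, and file names are assumptions for illustration only.
import cv2
import numpy as np
import skfuzzy as fuzz                                      # pip install scikit-fuzzy
from ultralytics import YOLO                                # pip install ultralytics
from deep_sort_realtime.deepsort_tracker import DeepSort    # pip install deep-sort-realtime

def fcm_foreground(frame_bgr, n_clusters=2):
    """Rough foreground emphasis via fuzzy C-means on pixel intensities (assumed preprocessing)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    data = gray.reshape(1, -1)                              # skfuzzy expects (features, samples)
    cntr, u, *_ = fuzz.cluster.cmeans(data, c=n_clusters, m=2.0, error=1e-3, maxiter=100)
    labels = np.argmax(u, axis=0).reshape(gray.shape)
    fg = int(np.argmax(cntr.ravel()))                       # assume the brighter cluster holds vehicles
    mask = (labels == fg).astype(np.uint8) * 255
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

detector = YOLO("yolov8n.pt")          # any YOLOv8 weights; fine-tune on aerial vehicles in practice
tracker = DeepSort(max_age=30)         # Kalman motion model + deep appearance association
orb = cv2.ORB_create()                 # placeholder hook for ORB-based re-identification (not wired up)

cap = cv2.VideoCapture("aerial_traffic.mp4")                # hypothetical input clip
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    segmented = fcm_foreground(frame)                       # slow at full resolution; downsample in practice
    result = detector(segmented, verbose=False)[0]
    detections = []
    for box, conf, cls in zip(result.boxes.xyxy.tolist(),
                              result.boxes.conf.tolist(),
                              result.boxes.cls.tolist()):
        x1, y1, x2, y2 = box
        detections.append(([x1, y1, x2 - x1, y2 - y1], conf, int(cls)))   # (left, top, w, h) format
    tracks = tracker.update_tracks(detections, frame=frame)
    for t in tracks:
        if not t.is_confirmed():
            continue
        x1, y1, x2, y2 = map(int, t.to_ltrb())
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"ID {t.track_id}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cap.release()

DeepSORT couples a Kalman-filter motion model with an appearance embedding for data association, which is what keeps track IDs stable across brief occlusions; the unused ORB handle only marks where feature-based recovery of lost vehicles could be attached, since the paper's exact re-identification logic is not reproduced here.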

Figures: fnbot-18-1430155-g001 through g011 are available in the PMC full text (PMC11362136).

Similar Articles

[1]
Vehicle recognition pipeline via DeepSort on aerial image datasets.

Front Neurorobot. 2024-8-16

[2]
Target detection and classification via EfficientDet and CNN over unmanned aerial vehicles.

Front Neurorobot. 2024-8-30

[3]
A feature fusion deep-projection convolution neural network for vehicle detection in aerial images.

PLoS One. 2021

[4]
Online Learning-Based Hybrid Tracking Method for Unmanned Aerial Vehicles.

Sensors (Basel). 2023-3-20

[5]
[Intelligent identification of livestock, a source of infection, based on deep learning of unmanned aerial vehicle images].

Zhongguo Xue Xi Chong Bing Fang Zhi Za Zhi. 2023-5-10

[6]
Vehicle Counting Based on Vehicle Detection and Tracking from Aerial Videos.

Sensors (Basel). 2018-8-4

[7]
Multi-objective pedestrian tracking method based on YOLOv8 and improved DeepSORT.

Math Biosci Eng. 2024-1-3

[8]
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.

Sensors (Basel). 2016-3-26

[9]
Flying foxes optimization with reinforcement learning for vehicle detection in UAV imagery.

Sci Rep. 2024-9-4

[10]
Dynamic Object Tracking on Autonomous UAV System for Surveillance Applications.

Sensors (Basel). 2021-11-27

Cited By

[1]
Target detection and classification via EfficientDet and CNN over unmanned aerial vehicles.

Front Neurorobot. 2024-8-30

References

[1]
Effects of impulse on prescribed-time synchronization of switching complex networks.

Neural Netw. 2024-6

[2]
UAV-YOLOv8: A Small-Object-Detection Model Based on Improved YOLOv8 for UAV Aerial Photography Scenarios.

Sensors (Basel). 2023-8-15

[3]
Vision-based dirt distribution mapping using deep learning.

Sci Rep. 2023-8-6

[4]
RayMVSNet++: Learning Ray-Based 1D Implicit Fields for Accurate Multi-View Stereo.

IEEE Trans Pattern Anal Mach Intell. 2023-11

[5]
Multi-UUV Maneuvering Counter-Game for Dynamic Target Scenario Based on Fractional-Order Recurrent Neural Network.

IEEE Trans Cybern. 2023-6

[6]
Extendable Multiple Nodes Recurrent Tracking Framework With RTU+.

IEEE Trans Image Process. 2022

[7]
Multiple Traffic Target Tracking with Spatial-Temporal Affinity Network.

Comput Intell Neurosci. 2022
