

RGDiNet: Efficient Onboard Object Detection with Faster R-CNN for Air-to-Ground Surveillance

Affiliation

Department of Electrical Engineering, Soonchunhyang University, Asan 31538, Korea.

Publication

Sensors (Basel). 2021 Mar 1;21(5):1677. doi: 10.3390/s21051677.

DOI: 10.3390/s21051677
PMID: 33804364
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7957492/
Abstract

An essential component for the autonomous flight or air-to-ground surveillance of a UAV is an object detection device. It must possess a high detection accuracy and requires real-time data processing to be employed for various tasks such as search and rescue, object tracking and disaster analysis. With the recent advancements in multimodal data-based object detection architectures, autonomous driving technology has significantly improved, and the latest algorithm has achieved an average precision of up to 96%. However, these remarkable advances may be unsuitable for the image processing of UAV aerial data directly onboard for object detection because of the following major problems: (1) Objects in aerial views are generally smaller than in ground-level images and are unevenly and sparsely distributed throughout an image; (2) Objects are exposed to various environmental changes, such as occlusion and background interference; and (3) The payload weight of a UAV is limited. Thus, we propose employing a new real-time onboard object detection architecture, an RGB aerial image and a point cloud data (PCD) depth map image network (RGDiNet). A faster region-based convolutional neural network was used as the baseline detection network, and an RGD, an integration of the RGB aerial image and the depth map reconstructed from the light detection and ranging PCD, was utilized as an input for computational efficiency. Performance tests and evaluation of the proposed RGDiNet were conducted under various operating conditions using hand-labeled aerial datasets. Consequently, it was shown that the proposed method has superior performance for the detection of vehicles and pedestrians compared with conventional vision-based methods.
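The core input described above is an "RGD" tensor: an RGB aerial image fused with a depth map reconstructed from LiDAR point cloud data, fed to a Faster R-CNN backbone. As a minimal sketch of that kind of RGB+depth fusion, the snippet below stacks a normalized depth channel onto an RGB image to form a four-channel input. The function name `make_rgd_input` and the exact channel layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def make_rgd_input(rgb, depth):
    """Fuse an RGB aerial image with a LiDAR-derived depth map.

    rgb:   (H, W, 3) uint8 aerial image
    depth: (H, W) float depth map reconstructed from point cloud data
    Returns a (H, W, 4) float array usable as a detector input.
    """
    # Normalize depth to [0, 255] so it matches the image value range
    d = depth.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-9) * 255.0
    # Stack the depth map as a fourth channel behind R, G, B
    return np.dstack([rgb.astype(np.float64), d])

rgb = np.zeros((8, 8, 3), dtype=np.uint8)
depth = np.linspace(0.0, 10.0, 64).reshape(8, 8)
x = make_rgd_input(rgb, depth)
print(x.shape)  # (8, 8, 4)
```

In practice the first convolutional layer of the detection backbone would need its input channel count adjusted from 3 to 4 to accept such a tensor.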


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5855/7957492/bbd414a96746/sensors-21-01677-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5855/7957492/6501039cf0b5/sensors-21-01677-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5855/7957492/8344d8f9e55c/sensors-21-01677-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5855/7957492/107357a11386/sensors-21-01677-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5855/7957492/d79880d31980/sensors-21-01677-g005.jpg

Similar articles

1. RGDiNet: Efficient Onboard Object Detection with Faster R-CNN for Air-to-Ground Surveillance.
   Sensors (Basel). 2021 Mar 1;21(5):1677. doi: 10.3390/s21051677.
2. Using Deep Learning and Low-Cost RGB and Thermal Cameras to Detect Pedestrians in Aerial Images Captured by Multirotor UAV.
   Sensors (Basel). 2018 Jul 12;18(7):2244. doi: 10.3390/s18072244.
3. Autonomous Vision-Based Aerial Grasping for Rotorcraft Unmanned Aerial Vehicles.
   Sensors (Basel). 2019 Aug 3;19(15):3410. doi: 10.3390/s19153410.
4. Edge Preserving and Multi-Scale Contextual Neural Network for Salient Object Detection.
   IEEE Trans Image Process. 2018;27(1):121-134. doi: 10.1109/TIP.2017.2756825.
5. Dynamic Object Tracking on Autonomous UAV System for Surveillance Applications.
   Sensors (Basel). 2021 Nov 27;21(23):7888. doi: 10.3390/s21237888.
6. Real-Time Vehicle-Detection Method in Bird-View Unmanned-Aerial-Vehicle Imagery.
   Sensors (Basel). 2019 Sep 13;19(18):3958. doi: 10.3390/s19183958.
7. Lightweight Detection Network Based on Sub-Pixel Convolution and Objectness-Aware Structure for UAV Images.
   Sensors (Basel). 2021 Aug 22;21(16):5656. doi: 10.3390/s21165656.
8. Deep Multimodal Detection in Reduced Visibility Using Thermal Depth Estimation for Autonomous Driving.
   Sensors (Basel). 2022 Jul 6;22(14):5084. doi: 10.3390/s22145084.
9. Enhancing UAV Visual Landing Recognition with YOLO's Object Detection by Onboard Edge Computing.
   Sensors (Basel). 2023 Nov 6;23(21):8999. doi: 10.3390/s23218999.
10. Object Detection Based on Faster R-CNN Algorithm with Skip Pooling and Fusion of Contextual Information.
    Sensors (Basel). 2020 Sep 25;20(19):5490. doi: 10.3390/s20195490.

References cited in this article

1. UAV-YOLO: Small Object Detection on Unmanned Aerial Vehicle Perspective.
   Sensors (Basel). 2020 Apr 15;20(8):2238. doi: 10.3390/s20082238.
2. Drone Mission Definition and Implementation for Automated Infrastructure Inspection Using Airborne Sensors.
   Sensors (Basel). 2018 Apr 11;18(4):1170. doi: 10.3390/s18041170.
3. RGBD Salient Object Detection via Deep Fusion.
   IEEE Trans Image Process. 2017 May;26(5):2274-2285. doi: 10.1109/TIP.2017.2682981. Epub 2017 Mar 15.
4. Statistical Hypothesis Detector for Abnormal Event Detection in Crowded Scenes.
   IEEE Trans Cybern. 2017 Nov;47(11):3597-3608. doi: 10.1109/TCYB.2016.2572609. Epub 2016 Jun 13.
5. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
   IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
6. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.
   Sensors (Basel). 2016 Mar 26;16(4):446. doi: 10.3390/s16040446.
7. Spectral sensitivities of the human cones.
   J Opt Soc Am A Opt Image Sci Vis. 1993 Dec;10(12):2491-521. doi: 10.1364/josaa.10.002491.