
RGDiNet: Efficient Onboard Object Detection with Faster R-CNN for Air-to-Ground Surveillance

Affiliation

Department of Electrical Engineering, Soonchunhyang University, Asan 31538, Korea.

Publication

Sensors (Basel). 2021 Mar 1;21(5):1677. doi: 10.3390/s21051677.

Abstract

An essential component for the autonomous flight or air-to-ground surveillance of a UAV is an object detection device. It must possess high detection accuracy and real-time data processing to be employed for various tasks such as search and rescue, object tracking, and disaster analysis. With recent advancements in multimodal data-based object detection architectures, autonomous driving technology has improved significantly, and the latest algorithms have achieved an average precision of up to 96%. However, these remarkable advances may be unsuitable for processing UAV aerial imagery for object detection directly onboard, because of the following major problems: (1) objects in aerial views are generally smaller than in ground-level images, and they are unevenly and sparsely distributed across an image; (2) objects are exposed to various environmental changes, such as occlusion and background interference; and (3) the payload weight of a UAV is limited. Thus, we propose a new real-time onboard object detection architecture: an RGB aerial image and point cloud data (PCD) depth map image network (RGDiNet). A faster region-based convolutional neural network (Faster R-CNN) was used as the baseline detection network, and an RGD, an integration of the RGB aerial image and the depth map reconstructed from light detection and ranging (LiDAR) PCD, was used as the input for computational efficiency. Performance tests and evaluation of the proposed RGDiNet were conducted under various operating conditions using hand-labeled aerial datasets. The results show that the proposed method outperforms conventional vision-based methods in detecting vehicles and pedestrians.
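The abstract describes fusing an RGB aerial image with a depth map reconstructed from LiDAR PCD into an "RGD" input. The sketch below illustrates one plausible form of this fusion: projecting camera-frame LiDAR points onto the image plane with a pinhole intrinsic matrix, then substituting the normalized depth for one color channel so the input stays three-channel. This is a minimal illustration, not the paper's implementation; the exact channel layout, projection model, and normalization used by RGDiNet are not specified in the abstract and are assumptions here.

```python
import numpy as np

def pcd_to_depth_map(points, K, image_shape):
    """Project LiDAR points (N, 3), already in camera coordinates,
    onto the image plane using intrinsics K; returns a sparse
    depth map of shape (H, W) with 0 where no point lands."""
    h, w = image_shape
    depth = np.zeros((h, w), dtype=np.float32)
    pts = points[points[:, 2] > 0]          # keep points in front of the camera
    uvw = (K @ pts.T).T                     # homogeneous pixel coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    z = pts[:, 2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[valid], u[valid]] = z[valid]
    return depth

def make_rgd(rgb, depth):
    """Fuse an RGB image (H, W, 3, uint8) with a depth map (H, W) by
    replacing the blue channel with normalized depth (an assumed
    'R, G, Depth' layout), keeping a 3-channel detector input."""
    d = depth / max(float(depth.max()), 1e-6)   # normalize depth to [0, 1]
    rgd = rgb.astype(np.float32) / 255.0
    rgd[..., 2] = d
    return rgd
```

Keeping the fused input three-channel is one common motivation for an R-G-Depth layout: a standard Faster R-CNN backbone pretrained on RGB images can then be used without modifying its first convolution layer.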


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5855/7957492/bbd414a96746/sensors-21-01677-g001.jpg
