
Radar-Spectrogram-Based UAV Classification Using Convolutional Neural Networks.

Affiliation

Department of Intelligence and Information, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea.

Publication Information

Sensors (Basel). 2020 Dec 31;21(1):210. doi: 10.3390/s21010210.

Abstract

With the upsurge in the use of Unmanned Aerial Vehicles (UAVs) in various fields, detecting and identifying them in real time has become an important problem. However, identifying UAVs is difficult because of their low-altitude, slow-speed, small radar cross-section (LSS) characteristics. Existing deterministic approaches lead to complex algorithms that require a large number of computations, making them unsuitable for real-time systems. Hence, effective alternatives that enable real-time identification of these new threats are needed. Deep learning-based classification models learn features from data on their own and have shown outstanding performance in computer vision tasks. In this paper, we propose a deep learning-based classification model that learns the micro-Doppler signatures (MDS) of targets represented in radar spectrogram images. To enable this, we first recorded five LSS targets (three types of UAVs and two types of human activities) with a frequency-modulated continuous-wave (FMCW) radar in various scenarios. We then converted the signals into spectrogram images using the short-time Fourier transform (STFT) and, after data refinement and augmentation, built our own radar spectrogram dataset. Second, we analyzed the characteristics of this dataset with the ResNet-18 model and, based on it, designed the ResNet-SP model with less computation, higher accuracy, and better stability. The results show that the proposed ResNet-SP achieves a training time of 242 s and an accuracy of 83.39%, outperforming ResNet-18, which takes 640 s to train and reaches an accuracy of 79.88%.
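
The abstract does not give the radar parameters or STFT settings, so the following is only a minimal sketch of the signal-to-spectrogram step on a synthetic micro-Doppler signal; the sampling rate, window length, overlap, and dynamic-range clipping are illustrative assumptions, not the authors' processing chain.

```python
# Minimal sketch: turning a 1-D radar return into a spectrogram image via the
# short-time Fourier transform (STFT). The signal is synthetic; all parameter
# values below are assumptions for illustration only.
import numpy as np
from scipy.signal import stft

fs = 2000                       # assumed sampling rate of the slow-time signal (Hz)
t = np.arange(0, 2.0, 1 / fs)   # 2 s observation window

# Toy target: a body return plus a sinusoidal phase modulation standing in
# for rotor-blade micro-Doppler, with a little complex noise.
body = np.exp(1j * 2 * np.pi * 60 * t)
micro = 0.3 * np.exp(1j * 20 * np.sin(2 * np.pi * 4 * t))
x = body + micro + 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# STFT -> complex time-frequency map -> magnitude in dB -> normalized image.
f, tau, Z = stft(x, fs=fs, window="hann", nperseg=256, noverlap=192,
                 return_onesided=False)
S = 20 * np.log10(np.abs(Z) + 1e-12)           # magnitude spectrogram in dB
S = np.clip(S, S.max() - 60, S.max())          # keep a 60 dB dynamic range
img = (S - S.min()) / (S.max() - S.min())      # scale to [0, 1] for a CNN input

print(img.shape)  # (frequency bins, time frames), ready to be saved as an image
```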

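The abstract names ResNet-18 and the lighter ResNet-SP but does not describe ResNet-SP's layers, so the sketch below is only a generic reduced-depth residual classifier for the 5-class setting (three UAV types plus two human activities); the class name SmallSpectrogramResNet, the channel widths, and the depth are assumptions, not the authors' architecture.

```python
# Hedged sketch of a reduced ResNet-style classifier for 5-class radar
# spectrograms. The real ResNet-SP architecture is not specified in the
# abstract; layer counts and channel widths here are assumptions.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Standard two-convolution residual block with an optional 1x1 shortcut."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class SmallSpectrogramResNet(nn.Module):
    """Shallow residual CNN mapping a 1-channel spectrogram to 5 class logits."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.layer1 = BasicBlock(32, 32)
        self.layer2 = BasicBlock(32, 64, stride=2)
        self.layer3 = BasicBlock(64, 128, stride=2)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, num_classes))

    def forward(self, x):
        x = self.stem(x)
        x = self.layer3(self.layer2(self.layer1(x)))
        return self.head(x)

# Example forward pass on a batch of 224x224 single-channel spectrogram images.
model = SmallSpectrogramResNet()
logits = model(torch.randn(8, 1, 224, 224))
print(logits.shape)  # torch.Size([8, 5])
```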

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e0dc/7795548/cbf6e7727519/sensors-21-00210-g001.jpg
