Tang Zeyu, Shen Hong, Lam Chan-Tong
Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China.
School of Engineering and Technology, Central Queensland University, Brisbane 4000, Australia.
Sensors (Basel). 2025 Mar 14;25(6):1809. doi: 10.3390/s25061809.
The increasing density and complexity of electromagnetic signals pose new challenges for multi-component radar signal recognition. To address the low recognition accuracy at low signal-to-noise ratios (SNRs) of the common recognition framework that combines time-frequency transformations (TFTs) with convolutional neural networks (CNNs), this paper proposes a new dual-component radar signal recognition framework (TFGM-RMNet) that combines a deep time-frequency generation module (TFGM) with a Transformer-based residual network. First, the received noisy signal is preprocessed. Then, the TFGM learns a complete set of basis functions to capture various time-frequency features of the time-domain signal and, under the supervision of high-quality images, outputs the corresponding time-frequency representation (TFR). Next, a ResNet combined with cascaded multi-head self-attention (MHSA) extracts local and global features from the TFR. Finally, the modulation formats are predicted through multi-label classification. The proposed framework requires no explicit TFT at test time: the TFT process is built into the TFGM, replacing the traditional transform, so both the classification results and an ideal TFR are produced during testing, yielding an end-to-end deep learning (DL) framework. Simulation results show that, for SNR > -8 dB, the method achieves an average recognition accuracy close to 100%, and it still reaches 97% accuracy at an SNR of -10 dB. At low SNRs, its recognition performance surpasses that of existing algorithms, including DCNN-RAMIML, DCNN-MLL, and DCNN-MIML.
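The two stages named in the abstract, MHSA-based feature extraction followed by multi-label prediction of the two modulation components, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the label set is hypothetical, the attention uses parameter-free identity Q/K/V projections, and the 0.5 sigmoid threshold is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads):
    """Toy MHSA over x of shape (seq_len, d_model).

    Identity Q/K/V projections keep the sketch parameter-free;
    a real model would use learned linear projections per head.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    heads = []
    for h in range(num_heads):
        q = k = v = x[:, h * d_head:(h + 1) * d_head]
        scores = softmax(q @ k.T / np.sqrt(d_head))  # (seq_len, seq_len)
        heads.append(scores @ v)                     # (seq_len, d_head)
    return np.concatenate(heads, axis=1)             # (seq_len, d_model)

# Hypothetical modulation label set for a dual-component signal
MODS = ["LFM", "BPSK", "Costas", "Frank", "P1"]

def predict_dual_component(logits, labels, threshold=0.5):
    """Multi-label head: each class is scored independently with a
    sigmoid; a dual-component signal activates two labels."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return [m for m, p in zip(labels, probs) if p >= threshold]

# Feature extraction on a random (seq_len=16, d_model=8) input
feat = multi_head_self_attention(rng.standard_normal((16, 8)), num_heads=4)
# Logits favouring two components (LFM + Frank)
picked = predict_dual_component([3.0, -2.0, -1.5, 2.5, -3.0], MODS)
```

The sigmoid-per-class head (rather than a single softmax) is what allows two modulation formats to be active simultaneously, which is the essence of the multi-label formulation described above.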