Zhang Jingjing, Yi Qingwu, Huang Lu, Yang Zihan, Cheng Jianqiang, Zhang Heng
State Key Laboratory of Satellite Navigation System and Equipment Technology, Shijiazhuang 050081, China.
The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang 050081, China.
Sensors (Basel). 2023 Oct 18;23(20):8552. doi: 10.3390/s23208552.
Non-Line-of-Sight (NLOS) propagation of Ultra-Wideband (UWB) signals reduces the reliability of positioning accuracy. It is therefore essential to identify the channel environment prior to localization, so that high-accuracy Line-of-Sight (LOS) ranging results are preserved and positively biased NLOS ranging results are corrected or rejected. To address the low accuracy and poor generalization of current NLOS/LOS identification methods based on the Channel Impulse Response (CIR), a multilayer Convolutional Neural Network (CNN) combined with a Channel Attention Module (CAM) is proposed for NLOS/LOS identification. First, the CAM is embedded in the multilayer CNN to extract time-domain features from the raw CIR. Then, a global average pooling layer replaces the fully connected layer for feature integration and classification output. Comparative experiments against models with different structures and against other identification methods are performed on the public dataset from the European Horizon 2020 project eWINE. The results show that the proposed CNN-CAM model achieves a LOS recall of 92.29%, an NLOS recall of 87.71%, an accuracy of 90.00%, and an F1-score of 90.22%, outperforming current state-of-the-art methods.
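The two architectural ideas in the abstract, a channel attention module embedded after convolutional feature extraction and global average pooling in place of a fully connected head, can be illustrated with a minimal numpy sketch. The shapes, reduction ratio, and random weights below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """SE-style channel attention over a 1-D CIR feature map.

    x:  (channels, time) feature map from a conv layer
    w1: (channels // r, channels) bottleneck weights (r = reduction ratio)
    w2: (channels, channels // r) expansion weights
    """
    # Squeeze: global average pooling over the time axis -> (channels,)
    s = x.mean(axis=1)
    # Excitation: bottleneck MLP + sigmoid gives per-channel weights in (0, 1)
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))
    # Re-weight each channel of the feature map
    return x * a[:, None]

def gap_head(x):
    # Global average pooling replaces the fully connected layer:
    # each channel acts as an evidence map and is averaged over time,
    # yielding one score per channel (per class, if channels == classes).
    return x.mean(axis=1)

# Toy example: 4 channels, 8 time steps, reduction ratio 2 (hypothetical)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
y = channel_attention(x, w1, w2)
scores = gap_head(y)
print(y.shape, scores.shape)  # (4, 8) (4,)
```

Because the attention weights lie in (0, 1), each channel is only ever attenuated, never amplified; the network learns to suppress channels carrying little NLOS/LOS-discriminative information.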