School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China.
Jiangsu High Technology Research Key Laboratory for Wireless Sensor Networks, Nanjing University of Posts and Telecommunications, Nanjing 210023, China.
Sensors (Basel). 2024 Jun 21;24(13):4051. doi: 10.3390/s24134051.
Millimeter-wave radar-based identification technology has a wide range of applications in persistent identity verification, covering areas such as safety-critical production, healthcare, and personalized smart consumption systems. It has received extensive attention from the academic community because it is non-invasive, insensitive to environmental conditions, and privacy-preserving. Existing identification algorithms mainly rely on a single signal, such as respiration or heartbeat; their reliability and accuracy are limited by the high similarity of breathing patterns across individuals and the low signal-to-noise ratio of heartbeat signals. To address these issues, this paper proposes a multimodal fusion algorithm for identity recognition. The algorithm extracts and fuses features derived from phase, respiration, and heartbeat signals. The spatial features of each modality are first extracted by a residual network (ResNet), after which the features are fused with a spatial-channel attention fusion module. On this basis, temporal features are further extracted with a time-series self-attention mechanism. Finally, feature vectors representing the user's vital-sign modalities are obtained and used for identity recognition. This method makes full use of the correlation and complementarity between the different modal signals to improve the accuracy and reliability of identification. Simulation experiments show that the identity recognition algorithm proposed in this paper achieves an accuracy of 94.26% on a self-collected dataset of 20 subjects, substantially higher than the roughly 85% achieved by traditional single-signal algorithms.
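To make the pipeline in the abstract concrete, the following is a minimal PyTorch sketch of the described architecture: one ResNet-style extractor per vital-sign modality, a spatial-channel attention fusion module, temporal self-attention, and a classification head. All module names, tensor shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the multimodal identity-recognition pipeline described above.
# Assumes 1-D vital-sign sequences (phase, respiration, heartbeat); shapes and sizes are illustrative.
import torch
import torch.nn as nn


class ResBlock1D(nn.Module):
    """Simple 1-D residual block standing in for the per-modality ResNet."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))


class SpatialChannelAttentionFusion(nn.Module):
    """Fuses stacked modality features with channel attention followed by spatial (time-axis) attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 2), nn.ReLU(),
            nn.Linear(channels // 2, channels), nn.Sigmoid())
        self.spatial_conv = nn.Conv1d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):                           # x: (batch, channels, time)
        ch_w = self.channel_fc(x.mean(dim=2))       # (batch, channels)
        x = x * ch_w.unsqueeze(-1)                  # channel attention
        sp_w = torch.sigmoid(self.spatial_conv(x))  # (batch, 1, time)
        return x * sp_w                             # spatial attention


class MultimodalIDNet(nn.Module):
    """Phase + respiration + heartbeat -> fused features -> temporal self-attention -> subject ID."""
    def __init__(self, num_subjects: int = 20, feat_ch: int = 32):
        super().__init__()
        # One ResNet-style extractor per modality (phase, respiration, heartbeat).
        self.extractors = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, feat_ch, kernel_size=7, padding=3),
                          ResBlock1D(feat_ch), ResBlock1D(feat_ch))
            for _ in range(3)])
        self.fusion = SpatialChannelAttentionFusion(3 * feat_ch)
        self.temporal_attn = nn.MultiheadAttention(3 * feat_ch, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(3 * feat_ch, num_subjects)

    def forward(self, phase, resp, heart):                   # each: (batch, 1, time)
        feats = [f(x) for f, x in zip(self.extractors, (phase, resp, heart))]
        fused = self.fusion(torch.cat(feats, dim=1))         # (batch, 3*feat_ch, time)
        seq = fused.transpose(1, 2)                          # (batch, time, 3*feat_ch)
        attended, _ = self.temporal_attn(seq, seq, seq)      # temporal self-attention
        return self.classifier(attended.mean(dim=1))         # (batch, num_subjects)


if __name__ == "__main__":
    model = MultimodalIDNet()
    signals = [torch.randn(2, 1, 256) for _ in range(3)]     # dummy phase/resp/heartbeat windows
    print(model(*signals).shape)                              # torch.Size([2, 20])
```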