School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China.
School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing 100101, China.
Sensors (Basel). 2020 Feb 11;20(4):956. doi: 10.3390/s20040956.
For autonomous driving, it is important to detect obstacles at all scales accurately for safety considerations. In this paper, we propose a new spatial attention fusion (SAF) method for obstacle detection using a mmWave radar and a vision sensor, where the sparsity of radar points is considered in the proposed SAF. The proposed fusion method can be embedded in the feature-extraction stage, which leverages the features of the mmWave radar and the vision sensor effectively. Based on the SAF, an attention weight matrix is generated to fuse the vision features, which differs from concatenation fusion and element-wise addition fusion. Moreover, the proposed SAF can be trained in an end-to-end manner, incorporated into recent deep learning object detection frameworks. In addition, we build a generation model that converts radar points to radar images for neural network training. Numerical results suggest that the newly developed fusion method achieves superior performance on public benchmarks. The source code will be released on GitHub.
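The two ideas in the abstract, rasterizing sparse radar points into an image-like map and using that map to spatially reweight vision features, can be illustrated with a minimal NumPy sketch. This is not the authors' released implementation: the point format `(u, v, depth)` and the plain sigmoid used to form the attention weight matrix are illustrative assumptions (in the actual paper the weights would come from learned convolutional layers).

```python
import numpy as np

def radar_points_to_image(points, h, w):
    """Rasterize sparse radar points into a single-channel 'radar image'
    aligned with the camera frame. Hypothetical point format: (u, v, depth),
    i.e. image column, image row, and range; the pixel stores the depth."""
    img = np.zeros((h, w), dtype=np.float32)
    for u, v, depth in points:
        u, v = int(u), int(v)
        if 0 <= v < h and 0 <= u < w:
            img[v, u] = depth
    return img

def spatial_attention_fusion(vision_feat, radar_feat):
    """Squash the radar feature map through a sigmoid to get a spatial
    attention weight matrix in [0, 1], then reweight the vision features
    element-wise (as opposed to concatenation or element-wise addition)."""
    attn = 1.0 / (1.0 + np.exp(-radar_feat))   # (H, W) attention weight matrix
    return vision_feat * attn[None, :, :]      # broadcast over the channel axis

# Toy usage: two radar points fused with an 8-channel vision feature map.
points = [(2, 1, 12.5), (0, 3, 30.0)]
radar_img = radar_points_to_image(points, h=4, w=4)
vision = np.ones((8, 4, 4), dtype=np.float32)  # C x H x W vision features
fused = spatial_attention_fusion(vision, radar_img)
print(fused.shape)  # (8, 4, 4)
```

Pixels with no radar return get a sigmoid(0) = 0.5 weight in this sketch, while pixels near radar points are passed through almost unchanged; a learned attention branch would replace the fixed sigmoid mapping.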