Feng Qijun, Zhao Chunyang, Liu Pengfei, Zhang Zhichao, Jin Yue, Tian Wanglin
School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China.
Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China.
Sensors (Basel). 2025 Jun 28;25(13):4040. doi: 10.3390/s25134040.
This paper presents a novel multi-view 3D object detection framework, Long-Term Spatial-Temporal Bird's-Eye View (LST-BEV), designed to improve detection performance in autonomous driving. Traditional 3D detection relies on sensors such as LiDAR, but visual perception with multi-camera systems is emerging as a more cost-effective alternative. Existing methods struggle to capture long-range dependencies and cross-task information due to limitations in their attention mechanisms. To address this, we propose a Long-Range Cross-Task Detection Head (LRCH) that captures these dependencies and integrates cross-task information for accurate predictions. Additionally, we introduce the Long-Term Temporal Perception Module (LTPM), which efficiently extracts temporal features by combining Mamba and linear attention, overcoming challenges in temporal frame extraction. Experimental results on the nuScenes dataset demonstrate that the proposed LST-BEV outperforms its baseline (SA-BEVPool) by 2.1% mAP and 2.7% NDS, indicating a significant performance improvement.
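To make the idea of "combining Mamba and linear attention" for temporal feature extraction concrete, the following is a minimal, hypothetical PyTorch sketch: it pairs a heavily simplified gated recurrence (a stand-in for a Mamba-style selective scan) with a kernelized linear-attention branch over a history of per-frame BEV features, then fuses the two paths. All module names, tensor shapes, and the fusion scheme are assumptions for illustration; this is not the paper's actual LTPM implementation.

```python
# Illustrative sketch only: a Mamba-like temporal recurrence plus linear
# attention over pooled BEV feature frames. Names and design are hypothetical.
import torch
import torch.nn as nn


class LinearAttention(nn.Module):
    """Kernelized attention: feature maps replace softmax over the full
    attention matrix, so cost grows linearly with sequence length."""
    def __init__(self, dim):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                 # x: (B, T, C)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k = q.softmax(dim=-1), k.softmax(dim=-2)       # simple normalizing feature maps
        context = torch.einsum('btc,btd->bcd', k, v)      # (B, C, C) temporal summary
        out = torch.einsum('btc,bcd->btd', q, context)    # queries read from the summary
        return self.proj(out)


class SimpleSSMScan(nn.Module):
    """Very reduced stand-in for a Mamba-style scan: a gated, per-channel
    exponential-decay recurrence over the temporal axis."""
    def __init__(self, dim):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(dim))       # learned per-channel decay (logit)
        self.in_proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):                                 # x: (B, T, C)
        a = torch.sigmoid(self.decay)                     # decay in (0, 1)
        u = self.in_proj(x)
        h = torch.zeros_like(x[:, 0])                     # hidden state (B, C)
        outs = []
        for t in range(x.shape[1]):                       # sequential scan over frames
            h = a * h + (1 - a) * u[:, t]
            outs.append(h)
        y = torch.stack(outs, dim=1)
        return y * torch.sigmoid(self.gate(x))            # gated output


class TemporalFusionSketch(nn.Module):
    """Run both temporal paths and fuse them for the current frame."""
    def __init__(self, dim):
        super().__init__()
        self.ssm = SimpleSSMScan(dim)
        self.attn = LinearAttention(dim)
        self.fuse = nn.Linear(dim * 2, dim)

    def forward(self, bev_history):                       # (B, T, C) per-frame BEV features
        combined = torch.cat([self.ssm(bev_history), self.attn(bev_history)], dim=-1)
        return self.fuse(combined)[:, -1]                 # fused feature for the latest frame


if __name__ == "__main__":
    frames = torch.randn(2, 8, 256)                       # 2 samples, 8 past frames, 256-dim features
    print(TemporalFusionSketch(256)(frames).shape)        # torch.Size([2, 256])
```

The intent of pairing the two branches is that the recurrent scan keeps a compact running state over an arbitrarily long frame history, while the linear-attention branch lets the current frame query that history content-adaptively at linear cost; how the real LTPM balances the two is described in the paper itself.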