Liu He, Liu Huaping, Wang Yikai, Sun Fuchun, Huang Wenbing
IEEE Trans Image Process. 2022;31:4050-4061. doi: 10.1109/TIP.2022.3180210. Epub 2022 Jun 14.
We propose a deep fine-grained multi-level fusion architecture for monocular 3D object detection, together with an additional anti-occlusion optimization process. Conventional monocular 3D object detection methods usually leverage geometric constraints, such as keypoints, object shape relationships, and 3D-to-2D optimizations, to offset the lack of accurate depth information. However, these methods still struggle to extract rich information directly from depth estimation for fusion. To solve this problem, we integrate monocular 3D features with a pseudo-LiDAR filter generation network across fine-grained multi-level layers. Our network exploits the inherent multi-scale structure and promotes the flow of depth and semantic information across stages. The new architecture thereby obtains features that incorporate more reliable depth information. At the same time, occlusion among objects is prevalent in natural scenes yet remains largely unsolved. We propose a novel loss function aimed at alleviating the occlusion problem. Extensive experiments show that the framework achieves competitive performance, especially in complex scenes with occlusion.
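As an illustration of the multi-level fusion idea described above, the sketch below fuses an image feature pyramid with a depth-derived (pseudo-LiDAR-style) feature pyramid level by level. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the channel counts, pyramid depth, and the 1x1-convolution fusion operator are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class MultiLevelFusion(nn.Module):
    """Illustrative sketch (assumed design, not the paper's code):
    fuse per-level image features with per-level depth features."""

    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        # One 1x1 conv per pyramid level mixes the concatenated
        # image + depth channels back down to the original width.
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * c, c, kernel_size=1) for c in channels
        )

    def forward(self, img_feats, depth_feats):
        # Concatenate the two modalities along the channel axis at
        # each scale, then fuse; returns one tensor per level.
        return [
            f(torch.cat([i, d], dim=1))
            for f, i, d in zip(self.fuse, img_feats, depth_feats)
        ]

# Toy usage with random feature pyramids at three assumed scales.
img = [torch.randn(1, c, h, h) for c, h in [(64, 64), (128, 32), (256, 16)]]
dep = [torch.randn_like(t) for t in img]
fused = MultiLevelFusion()(img, dep)
print([tuple(o.shape) for o in fused])
```

In this sketch, depth and semantic information mix at every scale rather than only at the final layer, which is the intent of the fine-grained multi-level fusion the abstract describes.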