Qin Zengyi, Wang Jinglu, Lu Yan
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5170-5184. doi: 10.1109/TPAMI.2021.3074363. Epub 2022 Aug 4.
Detecting and localizing objects in real 3D space, which plays a crucial role in scene understanding, is particularly challenging given only a monocular image, due to the geometric information lost during projection onto the image plane. We propose MonoGRNet for amodal 3D object detection from a monocular image via geometric reasoning in both the observed 2D projection and the unobserved depth dimension. MonoGRNet decomposes the monocular 3D object detection task into four sub-tasks: 2D object detection, instance-level depth estimation, projected 3D center estimation, and local corner regression. This task decomposition significantly simplifies monocular 3D object detection, allowing the target 3D bounding boxes to be predicted efficiently in a single forward pass, without object proposals, post-processing, or the computationally expensive pixel-level depth estimation used by previous methods. In addition, MonoGRNet flexibly adapts to both fully and weakly supervised learning, which improves the feasibility of the framework in diverse settings. Experiments are conducted on the KITTI, Cityscapes, and MS COCO datasets, and the results demonstrate the promising performance of the framework in various scenarios.
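To make the geometric reasoning concrete, the sketch below shows how the last three sub-tasks can be combined: given a predicted projected 3D center, an instance-level depth, and regressed local corner offsets, pinhole back-projection recovers the 3D box center and hence the amodal box. This is a minimal illustration under assumed conventions, not the paper's actual code; the function name, argument shapes, and the example intrinsics are all hypothetical.

```python
import numpy as np

def recover_3d_box(center_2d, depth, local_corners, K):
    """Lift a projected 3D center to camera coordinates and assemble the box.

    Hypothetical helper illustrating the geometric step described in the
    abstract; names and shapes are assumptions, not the paper's API.

    center_2d:     (u, v) pixel location of the *projected* 3D box center
    depth:         instance-level depth Z of the box center (meters)
    local_corners: (8, 3) corner offsets regressed relative to the 3D center
    K:             (3, 3) camera intrinsic matrix
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = center_2d
    # Pinhole back-projection: a single per-instance depth suffices to
    # lift the projected center into 3D camera coordinates.
    center_3d = np.array([(u - cx) * depth / fx,
                          (v - cy) * depth / fy,
                          depth])
    # The 8 box corners are the regressed local offsets shifted by the center.
    return center_3d + local_corners  # (8, 3) corners in the camera frame

# Toy usage with made-up numbers (KITTI-like intrinsics).
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
corners = recover_3d_box((650.0, 180.0), 15.0, np.zeros((8, 3)), K)
```

Because each quantity is predicted per instance rather than per pixel, this lifting step requires only one depth value per object, which is consistent with the abstract's claim that expensive pixel-level depth estimation can be avoided.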