Zhao Shanshan, Gong Mingming, Fu Huan, Tao Dacheng
IEEE Trans Image Process. 2021;30:5264-5276. doi: 10.1109/TIP.2021.3079821. Epub 2021 May 31.
Depth completion aims to recover a dense depth map from sparse depth data and a corresponding single RGB image. The observed pixels provide significant guidance for recovering the depth of the unobserved pixels. However, due to the sparsity of the depth data, the standard convolution operation, adopted by most existing methods, is not effective at modeling the contexts of pixels with observed depth values. To address this issue, we propose to adopt graph propagation to capture the observed spatial contexts. Specifically, we first construct multiple graphs at different scales from the observed pixels. Since the graph structure varies from sample to sample, we then apply an attention mechanism to the propagation, which encourages the network to model the contextual information adaptively. Furthermore, considering the multi-modality of the input data, we apply graph propagation to the two modalities separately to extract multi-modal representations. Finally, we introduce a symmetric gated fusion strategy to exploit the extracted multi-modal features effectively. The proposed strategy preserves the original information of one modality while absorbing complementary information from the other by learning adaptive gating weights. Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves state-of-the-art performance on two benchmarks, i.e., KITTI and NYU-v2, while having fewer parameters than the latest models. Our code is available at: https://github.com/sshan-zhao/ACMNet.
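The attention-based graph propagation described above can be illustrated with a minimal sketch: a k-nearest-neighbor graph is built over the observed (valid-depth) pixels, and each node aggregates neighbor features with adaptively normalized attention weights. The function name, the dot-product attention score, and the residual aggregation below are illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

def attentive_graph_propagation(feats, coords, k=4):
    """Hypothetical sketch of attention-based propagation over a k-NN graph
    of observed pixels.

    feats  : (n, c) feature vectors of the n observed pixels
    coords : (n, 2) pixel coordinates used to build the graph
    """
    n = feats.shape[0]
    # Build a k-NN graph from the pixel coordinates of observed points.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)              # exclude self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]      # indices of k nearest neighbors

    out = np.empty_like(feats)
    for i in range(n):
        # Attention scores from feature similarity (assumed dot-product form),
        # softmax-normalized per node so weights adapt to each local graph.
        scores = feats[nbrs[i]] @ feats[i]
        w = np.exp(scores - scores.max())
        w /= w.sum()
        # Aggregate neighbor features with the adaptive weights (residual form).
        out[i] = feats[i] + w @ feats[nbrs[i]]
    return out
```

Because the neighbor set and the attention weights are recomputed per sample, the propagation pattern adapts to the varying graph structure, which is the property the abstract attributes to the attention mechanism.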
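The symmetric gated fusion can likewise be sketched: each modality branch keeps its own features and absorbs the other branch's features through a learned sigmoid gate, applied symmetrically in both directions. The gate parameterization (sigmoid over a linear map of the concatenated features) and the residual form are assumptions for illustration.

```python
import numpy as np

def symmetric_gated_fusion(feat_d, feat_rgb, W_d, W_rgb):
    """Hypothetical sketch of symmetric gated fusion of two modalities.

    feat_d, feat_rgb : (n, c) features from the depth and RGB branches
    W_d, W_rgb       : (2c, c) learned gating weights (assumed linear maps)
    """
    concat = np.concatenate([feat_d, feat_rgb], axis=-1)
    # Adaptive gates in (0, 1), one per branch (sigmoid of a linear map).
    g_d = 1.0 / (1.0 + np.exp(-(concat @ W_d)))
    g_rgb = 1.0 / (1.0 + np.exp(-(concat @ W_rgb)))
    # Each branch preserves its original features and absorbs gated
    # complementary information from the other modality.
    out_d = feat_d + g_d * feat_rgb
    out_rgb = feat_rgb + g_rgb * feat_d
    return out_d, out_rgb
```

The symmetry lies in applying the same gating scheme in both directions, so neither modality dominates: each output retains its own signal while the gates decide, per element, how much of the other modality to mix in.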