Shen Zhelun, Song Xibin, Dai Yuchao, Zhou Dingfu, Rao Zhibo, Zhang Liangjun
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):14301-14320. doi: 10.1109/TPAMI.2023.3300976. Epub 2023 Nov 3.
Due to domain differences and unbalanced disparity distributions across datasets, current stereo matching approaches are commonly limited to a specific dataset and generalize poorly to others. This domain-shift issue is usually addressed by substantial adaptation on costly target-domain ground-truth data, which cannot be easily obtained in practical settings. In this paper, we propose to dig into uncertainty estimation for robust stereo matching. Specifically, to balance the disparity distribution, we employ pixel-level uncertainty estimation to adaptively adjust the disparity search space of the next stage, driving the network to progressively prune the space of unlikely correspondences. Then, to address the scarcity of ground-truth data, an uncertainty-based pseudo-labeling scheme is proposed to adapt the pre-trained model to the new domain: pixel-level and area-level uncertainty estimation filter out high-uncertainty pixels from the predicted disparity maps, yielding sparse yet reliable pseudo-labels that bridge the domain gap. Experimentally, our method shows strong cross-domain, adaptation, and joint generalization, and obtained 1st place on the stereo task of the Robust Vision Challenge 2020. Additionally, our uncertainty-based pseudo-labels can be extended to train monocular depth estimation networks in an unsupervised way, and even achieve performance comparable with supervised methods.
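The pseudo-labeling step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the threshold values, window size, and two-stage filtering (a pixel-level uncertainty cutoff followed by an area-level check that discards windows with too few surviving pixels) are assumptions made for illustration only:

```python
import numpy as np

def filter_pseudo_labels(disparity, uncertainty,
                         pixel_thresh=0.3, area_frac=0.5, window=8):
    """Keep only low-uncertainty disparity predictions as sparse pseudo-labels.

    Pixel level: drop pixels whose estimated uncertainty exceeds pixel_thresh.
    Area level: invalidate whole windows where the fraction of surviving
    pixels falls below area_frac (hypothetical heuristic, not the paper's).
    """
    h, w = disparity.shape
    keep = uncertainty < pixel_thresh  # pixel-level filter

    # area-level filter over non-overlapping windows
    for y in range(0, h, window):
        for x in range(0, w, window):
            block = keep[y:y + window, x:x + window]  # view into keep
            if block.mean() < area_frac:
                block[:] = False

    pseudo = np.where(keep, disparity, np.nan)  # NaN marks unlabeled pixels
    return pseudo, keep

# Toy usage: top half of the image is high-uncertainty and gets filtered out.
disp = np.ones((16, 16))
unc = np.zeros((16, 16))
unc[:8, :] = 1.0
pseudo, keep = filter_pseudo_labels(disp, unc)
```

The surviving mask `keep` is sparse but reliable; during adaptation, a loss would be computed only on pixels where `keep` is true.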