Jeon Suyeon, Heo Yong Seok
Department of Artificial Intelligence, Ajou University, Suwon 16499, Korea.
Department of Electrical and Computer Engineering, Ajou University, Suwon 16499, Korea.
Sensors (Basel). 2022 Jul 23;22(15):5500. doi: 10.3390/s22155500.
While recent deep-learning-based stereo-matching networks have shown outstanding advances, some challenges remain unsolved. First, most state-of-the-art stereo models employ 3D convolutions for 4D cost-volume aggregation, which limits the deployment of these networks in resource-limited mobile environments owing to their heavy computation and memory consumption. Although some efficient networks exist, most still incur too heavy a computational cost to run on mobile computing devices in real time. Second, most stereo networks supervise the cost volume only indirectly, through a disparity regression loss computed with the softargmax function. This causes problems in ambiguous regions, such as object boundaries, because many unreasonable cost distributions can yield the same regressed disparity, which results in overfitting. A few works address this problem by generating an artificial cost distribution from only the ground-truth disparity value, which is insufficient to fully regularize the cost volume. To address these problems, we first propose an efficient multi-scale sequential feature fusion network (MSFFNet). Specifically, we connect multi-scale SFF modules in parallel with a cross-scale fusion function to generate a set of cost volumes at different scales. These cost volumes are then effectively combined using the proposed interlaced concatenation method. Second, we propose an adaptive cost-volume-filtering (ACVF) loss function that directly supervises the estimated cost volume. The proposed ACVF loss constrains the cost volume using both the probability distribution generated from the ground-truth disparity map and the distribution estimated by a teacher network that achieves higher accuracy. Results of several experiments on representative stereo-matching datasets show that the proposed method is more efficient than previous ones. Our network uses fewer parameters and produces accurate disparity maps at higher speed than existing state-of-the-art stereo models. Concretely, our network achieves an EPE of 1.01 with a runtime of 42 ms, 2.92M parameters, and 97.96G FLOPs on the Scene Flow test set. Compared with PSMNet, our method is 89% faster and 7% more accurate, with 45% fewer parameters.
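To make the supervision issue concrete, the following is a minimal PyTorch sketch (not the authors' released code) of softargmax disparity regression and an ACVF-style loss that supervises the cost distribution directly with a blend of a ground-truth-derived distribution and a teacher network's distribution. The Laplacian form of the GT distribution, the bandwidth b, and the mixing weight alpha are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn.functional as F

def soft_argmax(cost_volume):
    """Regress disparity from a cost volume of shape (B, D, H, W)."""
    prob = F.softmax(cost_volume, dim=1)                       # per-pixel distribution over D candidates
    disp = torch.arange(prob.size(1), device=prob.device,
                        dtype=prob.dtype).view(1, -1, 1, 1)
    return (prob * disp).sum(dim=1)                            # expected disparity, (B, H, W)

def gt_distribution(gt_disp, max_disp, b=1.0):
    """Assumed Laplacian distribution over disparity candidates,
    centered at the ground-truth disparity (B, H, W)."""
    d = torch.arange(max_disp, device=gt_disp.device,
                     dtype=gt_disp.dtype).view(1, -1, 1, 1)
    return F.softmax(-(d - gt_disp.unsqueeze(1)).abs() / b, dim=1)

def acvf_style_loss(student_cost, teacher_cost, gt_disp, alpha=0.5):
    """Cross-entropy between the student's cost distribution and a blend
    of the GT-derived and teacher-derived distributions (hypothetical blend)."""
    log_p = F.log_softmax(student_cost, dim=1)
    target = (alpha * gt_distribution(gt_disp, student_cost.size(1))
              + (1 - alpha) * F.softmax(teacher_cost, dim=1).detach())
    return -(target * log_p).sum(dim=1).mean()

The point of direct supervision, as opposed to the usual disparity regression loss, is that the loss penalizes implausible (e.g., multi-modal) cost distributions even when their softargmax output happens to match the ground truth.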
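The sketch below illustrates one plausible reading of interlaced concatenation of multi-scale cost volumes: each volume is upsampled to a common grid and the volumes are interleaved plane-by-plane along the disparity axis rather than stacked block-by-block. The trilinear upsampling and the choice of interleaving axis are assumptions for illustration.

import torch
import torch.nn.functional as F

def interlaced_concat(volumes):
    """volumes: list of S cost volumes, each of shape (B, C, D_i, H_i, W_i)."""
    # Upsample every volume to the resolution of the first (largest) one.
    target = volumes[0].shape[2:]
    up = [F.interpolate(v, size=target, mode='trilinear',
                        align_corners=False) for v in volumes]
    # Interleave along the disparity axis: d0 of scale 0, d0 of scale 1, ...
    stacked = torch.stack(up, dim=3)            # (B, C, D, S, H, W)
    b, c, d, s, h, w = stacked.shape
    return stacked.view(b, c, d * s, h, w)      # (B, C, D*S, H, W)

Interleaving keeps corresponding disparity planes from different scales adjacent, so a subsequent aggregation layer can mix cross-scale evidence for each disparity hypothesis locally instead of across distant channel blocks.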