Faculty of Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada.
Sensors (Basel). 2023 Apr 18;23(8):4080. doi: 10.3390/s23084080.
Regularization is an important technique for training deep neural networks. In this paper, we propose a novel shared-weight teacher-student strategy and a content-aware regularization (CAR) module. Based on a tiny, learnable, content-aware mask, CAR is randomly applied to a subset of channels in the convolutional layers during training, guiding the predictions within the shared-weight teacher-student strategy. CAR prevents unsupervised motion estimation methods from co-adaptation. Extensive experiments on optical flow and scene flow estimation show that our method significantly improves the performance of the original networks and surpasses other popular regularization methods. It also outperforms all variants with similar architectures, as well as the supervised PWC-Net, on MPI-Sintel and KITTI. Our method shows strong cross-dataset generalization: trained solely on MPI-Sintel, it outperforms a similarly trained supervised PWC-Net on the KITTI benchmarks by 27.9% and 32.9%, respectively. Our method also uses fewer parameters, requires less computation, and has faster inference than the original PWC-Net.
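To make the mechanism concrete, the following is a minimal, hypothetical PyTorch sketch of a CAR-style module as described in the abstract: a tiny learnable generator (assumed here to be a 1x1 convolution with a sigmoid) produces a content-aware mask, which during training modulates a randomly chosen subset of channels. The class name, mask generator, and drop ratio are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class ContentAwareRegularization(nn.Module):
    """Illustrative sketch of a CAR-style module (hypothetical implementation)."""

    def __init__(self, channels: int, drop_ratio: float = 0.5):
        super().__init__()
        # Tiny, learnable, content-aware mask generator (assumed: 1x1 conv + sigmoid).
        self.mask_gen = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.drop_ratio = drop_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # CAR is only active during training.
        b, c, h, w = x.shape
        mask = self.mask_gen(x)  # (B, 1, H, W) content-aware mask
        # Randomly select a subset of channels to regularize.
        selected = torch.rand(c, device=x.device) < self.drop_ratio
        selected = selected.view(1, c, 1, 1).to(x.dtype)
        # Selected channels are modulated by the mask; the rest pass through unchanged.
        return x * (1 - selected) + x * mask * selected


# Example usage: insert after a convolutional layer of the student branch.
feat = torch.randn(2, 64, 32, 32)
car = ContentAwareRegularization(64)
car.train()
out = car(feat)  # same shape as feat
```

In a shared-weight teacher-student setup, the teacher branch would typically run the same weights without CAR, and its predictions would guide the regularized student branch.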