Hastings Greer, Roland Kwitt, François-Xavier Vialard, Marc Niethammer
Department of Computer Science, UNC Chapel Hill, USA.
Department of Computer Science, University of Salzburg, Austria.
Proc IEEE Int Conf Comput Vis. 2021 Oct;2021:3376-3385. doi: 10.1109/iccv48922.2021.00338.
Learning maps between data samples is fundamental. Applications range from representation learning, image translation, and generative modeling to the estimation of spatial deformations. Such maps relate feature vectors, or map between feature spaces. Well-behaved maps should be regular, which can be imposed explicitly or may emanate from the data itself. We explore what induces regularity for spatial transformations, e.g., when computing image registrations. Classical optimization-based models compute maps between pairs of samples and rely on an appropriate regularizer for well-posedness. Recent deep learning approaches have attempted to avoid using such regularizers altogether by relying on the sample population instead. We explore whether it is possible to obtain spatial regularity using an inverse consistency loss only, and elucidate what explains map regularity in such a context. We find that deep networks combined with an inverse consistency loss and randomized off-grid interpolation yield well-behaved, approximately diffeomorphic spatial transformations. Despite the simplicity of this approach, our experiments present compelling evidence, on both synthetic and real data, that regular maps can be obtained without carefully tuned explicit regularizers, while achieving competitive registration performance.
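The central object in the abstract, an inverse consistency loss, penalizes the deviation of the composed forward and backward maps from the identity. The following is a minimal NumPy/SciPy sketch of such a penalty for 2-D coordinate maps, using bilinear interpolation for off-grid composition; it is an illustration under assumed conventions (maps stored as (2, H, W) coordinate arrays), not the authors' implementation, and all function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi, psi):
    """Bilinear composition (phi o psi)(x) = phi(psi(x)) for 2-D maps.

    phi, psi: coordinate arrays of shape (2, H, W) giving, at each pixel,
    the (row, col) location the map sends that pixel to.
    """
    # Evaluate each coordinate channel of phi at the (generally off-grid)
    # locations produced by psi, with linear (order=1) interpolation.
    return np.stack([map_coordinates(phi[c], psi, order=1, mode="nearest")
                     for c in range(2)])

def inverse_consistency_loss(phi_ab, phi_ba, identity):
    """Mean-squared deviation of both compositions from the identity map."""
    return (np.mean((compose(phi_ab, phi_ba) - identity) ** 2)
            + np.mean((compose(phi_ba, phi_ab) - identity) ** 2))

H = W = 8
identity = np.stack(np.meshgrid(np.arange(H, dtype=float),
                                np.arange(W, dtype=float), indexing="ij"))

# The identity map composed with itself is perfectly inverse-consistent,
# while two copies of the same (non-identity) translation are not.
loss_id = inverse_consistency_loss(identity, identity, identity)
loss_bad = inverse_consistency_loss(identity + 1.0, identity + 1.0, identity)
```

In a learning setting, `phi_ab` and `phi_ba` would be the outputs of a network applied to the image pair in both orders, and this loss would be minimized alongside an image similarity term; the paper's randomized off-grid evaluation is not reproduced here.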