Variational Networks: An Optimal Control Approach to Early Stopping Variational Methods for Image Restoration

Effland Alexander, Kobler Erich, Kunisch Karl, Pock Thomas
1 Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria.
2 Institute of Mathematics and Scientific Computing, University of Graz, Graz, Austria.
J Math Imaging Vis. 2020;62(3):396-416. doi: 10.1007/s10851-019-00926-8. Epub 2020 Mar 11.
We investigate a well-known phenomenon of variational approaches in image processing, in which the best image quality is typically achieved when the gradient flow process is stopped before it converges to a stationary point. This paradox originates from a tradeoff between the optimization error and the modeling error of the underlying variational model, and it persists even if deep learning methods are used to learn highly expressive regularizers from data. In this paper, we take advantage of this paradox and introduce an optimal stopping time into the gradient flow process, which is in turn learned from data by means of an optimal control approach. After a time discretization, we obtain variational networks, which can be interpreted as a particular type of recurrent neural network. The learned variational networks achieve competitive results for image denoising and image deblurring on a standard benchmark data set. One of the key theoretical results is the development of first- and second-order conditions for verifying the optimality of the stopping time. A nonlinear spectral analysis of the gradient of the learned regularizer gives enlightening insight into the different regularization properties.
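The paper's learned regularizer and its training procedure are not reproduced here. As a minimal sketch, assuming a hand-crafted quadratic (Tikhonov-type) smoothness term in place of the learned regularizer, the following NumPy snippet illustrates the explicit-Euler discretization of the gradient flow and the early-stopping tradeoff described above; the loop over candidate step counts only crudely mimics searching for the optimal stopping time, and all function names and parameter values are illustrative.

```python
import numpy as np

def gradient_flow_restore(y, grad_R, lam=0.0, step=0.2, n_steps=20):
    """Explicit-Euler discretization of the gradient flow
        dx/dt = -(grad_R(x) + lam * (x - y)),   x(0) = y.
    Each iteration is one step of the time-discretized flow; n_steps * step
    plays the role of the stopping time T that the paper learns from data."""
    x = y.copy()
    for _ in range(n_steps):
        x = x - step * (grad_R(x) + lam * (x - y))
    return x

def grad_tikhonov(x):
    """Gradient of R(x) = 0.5 * sum_i (x[i+1] - x[i])**2, i.e. a discrete
    negative Laplacian with Neumann-type boundary handling (a stand-in for
    the gradient of the learned regularizer)."""
    g = np.empty_like(x)
    g[1:-1] = 2.0 * x[1:-1] - x[:-2] - x[2:]
    g[0] = x[0] - x[1]
    g[-1] = x[-1] - x[-2]
    return g

# Toy 1-D denoising: for lam = 0 the flow is a pure diffusion whose
# stationary point is a flat signal, so an intermediate stopping time
# gives the smallest error -- the tradeoff the abstract describes.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
for T in (5, 25, 1000, 20000):
    restored = gradient_flow_restore(noisy, grad_tikhonov, n_steps=T)
    print(f"n_steps={T:5d}  error={np.linalg.norm(restored - clean):.3f}")
```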
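The paper's exact optimality conditions are not restated here. As a generic illustration, writing the restoration loss as a function of the stopping time, J(T) = ℓ(x(T)) for a smooth loss ℓ along the flow dx/dt = -∇E(x), the classical first- and second-order conditions for an interior optimal stopping time T* take the form below.

```latex
% Generic interior optimality conditions for the stopping time T*
% (an illustrative restatement, not the paper's exact theorem):
\[
  J'(T^\ast) = 0, \qquad J''(T^\ast) > 0,
\]
% where, by the chain rule along the flow \dot{x}(t) = -\nabla E(x(t)),
\[
  J'(T) = \big\langle \nabla \ell\big(x(T)\big),\, \dot{x}(T) \big\rangle
        = -\big\langle \nabla \ell\big(x(T)\big),\, \nabla E\big(x(T)\big) \big\rangle .
\]
```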