Department of Mathematics, University of Oslo, 0316 Oslo, Norway.
Instituto de Telecomunicações, Faculdade de Ciências, Universidade do Porto, Porto 4169-007, Portugal.
Proc Natl Acad Sci U S A. 2020 Dec 1;117(48):30088-30095. doi: 10.1073/pnas.1907377117. Epub 2020 May 11.
Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with the potential to change the field. In this paper, we demonstrate a crucial phenomenon: Deep learning typically yields unstable methods for image reconstruction. The instabilities usually occur in several forms: 1) Certain tiny, almost undetectable perturbations, both in the image and sampling domain, may result in severe artefacts in the reconstruction; 2) a small structural change, for example, a tumor, may not be captured in the reconstructed image; and 3) (a counterintuitive type of instability) more samples may yield poorer performance. Our stability test, with accompanying algorithms and easy-to-use software, detects these instability phenomena. The test is aimed at researchers, to test their networks for instabilities, and at government agencies, such as the Food and Drug Administration (FDA), to secure safe use of deep learning methods.
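The first form of instability, where a tiny perturbation in the sampling domain produces a severe reconstruction error, can be illustrated on a toy linear inverse problem. The sketch below is not the paper's stability test (which targets trained neural networks); it only shows the underlying amplification mechanism, using a deliberately ill-conditioned forward operator and a pseudoinverse reconstruction. All names (`A`, `eps`, the problem sizes) are illustrative assumptions.

```python
import numpy as np

# Toy stability check (illustrative sketch, not the paper's algorithm):
# measure how much a tiny perturbation e in the sampling domain can
# change the reconstruction f(y) = A^+ y of a linear inverse problem.

rng = np.random.default_rng(0)

# Ill-conditioned forward operator A (singular values decay to 1e-3).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -3, n)           # condition number ~ 1e3
A = U @ np.diag(s) @ V.T

x = rng.standard_normal(n)          # ground-truth image (as a vector)
y = A @ x                           # clean measurements

A_pinv = np.linalg.pinv(A)

# Worst-case unit perturbation: the left singular vector of A for its
# smallest singular value; A^+ amplifies it by 1 / s_min.
e = U[:, -1]
eps = 1e-3                          # "almost undetectable" perturbation size

dx = A_pinv @ (y + eps * e) - A_pinv @ y
amplification = np.linalg.norm(dx) / (eps * np.linalg.norm(e))
print(f"perturbation amplified by a factor of {amplification:.1f}")
```

For a learned reconstruction network the amplification factor has no such closed form, which is why the paper's test instead searches for worst-case perturbations numerically; the toy example only conveys why a perturbation smaller than the noise floor can still dominate the reconstruction error.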