Centre for Medical Image Computing and Department of Computer Science, UCL, Gower Street, London WC1E 6BT, UK; Healthcare Intelligence, Microsoft Research Cambridge, UK.
Machine Learning Lab, University of Amsterdam, the Netherlands.
Neuroimage. 2021 Jan 15;225:117366. doi: 10.1016/j.neuroimage.2020.117366. Epub 2020 Oct 9.
Deep learning (DL) has shown great potential in medical image enhancement problems such as super-resolution and image synthesis. However, to date, most existing approaches are based on deterministic models, neglecting the different sources of uncertainty present in such problems. Here we introduce methods to characterise distinct components of uncertainty, and demonstrate the ideas using diffusion MRI super-resolution. Specifically, we propose to account for intrinsic uncertainty through a heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference, and we integrate the two to quantify predictive uncertainty over the output image. Moreover, we introduce a method to propagate the predictive uncertainty on a multi-channelled image to derived scalar parameters, and to quantify separately the effects of intrinsic and parameter uncertainty therein. The methods are evaluated for super-resolution of two different signal representations of diffusion MR images (Diffusion Tensor imaging and Mean Apparent Propagator MRI) and their derived quantities, such as mean diffusivity and fractional anisotropy, on multiple datasets of both healthy and pathological human brains. The results highlight three key potential benefits of modelling uncertainty for improving the safety of DL-based image enhancement systems. Firstly, modelling uncertainty improves predictive performance even when the test data depart from the training data ("out-of-distribution" datasets). Secondly, the predictive uncertainty correlates strongly with reconstruction error, and is therefore capable of detecting predictive "failures". Results on both healthy subjects and patients with brain glioma or multiple sclerosis demonstrate that such an uncertainty measure enables subject-specific and voxel-wise risk assessment of the super-resolved images, which can be accounted for in subsequent analysis. Thirdly, we show that the method for decomposing predictive uncertainty into its independent sources provides high-level "explanations" of model performance by separately quantifying how much uncertainty arises from the inherent difficulty of the task and how much from the limited training examples. The introduced concepts of uncertainty modelling extend naturally to many other imaging modalities and data enhancement applications.
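For intuition, the sketch below shows one standard way such a predictive-uncertainty decomposition can be computed: the network outputs a per-voxel mean and log-variance (the heteroscedastic noise model), and repeated stochastic forward passes approximate the posterior over weights, so that total predictive variance splits into the average predicted noise variance (intrinsic) plus the variance of the predicted means (parameter). This is a minimal illustration assuming MC dropout as the approximate Bayesian inference scheme; the abstract does not specify the exact inference method used in the paper, and the names HeteroscedasticNet, predict_with_uncertainty, and the toy 6-channel patch are all hypothetical.

```python
import torch
import torch.nn as nn

class HeteroscedasticNet(nn.Module):
    """Toy 3D network predicting a per-voxel mean and log-variance."""
    def __init__(self, in_ch=6, out_ch=6, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p=0.1),
            nn.Conv3d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p=0.1),
        )
        self.mean_head = nn.Conv3d(hidden, out_ch, 1)
        self.logvar_head = nn.Conv3d(hidden, out_ch, 1)  # intrinsic noise

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=20):
    """Monte Carlo decomposition of the predictive variance:
    total ~= E[sigma^2] (intrinsic) + Var[mu] (parameter)."""
    model.train()  # keep dropout active at test time (MC dropout)
    mus, sig2s = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        mus.append(mu)
        sig2s.append(logvar.exp())
    mus, sig2s = torch.stack(mus), torch.stack(sig2s)
    intrinsic = sig2s.mean(0)   # aleatoric: inherent difficulty of the task
    parameter = mus.var(0)      # epistemic: limited training examples
    return mus.mean(0), intrinsic, parameter

# Usage on a dummy low-resolution diffusion-tensor patch (6 DT channels)
x = torch.randn(1, 6, 8, 8, 8)
mean, intrinsic, parameter = predict_with_uncertainty(HeteroscedasticNet(), x)
total = intrinsic + parameter   # predictive variance per voxel and channel
```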
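The abstract also describes propagating the predictive uncertainty on a multi-channelled image to derived scalar parameters such as mean diffusivity (MD) and fractional anisotropy (FA). A simple way to realise this, sketched below under assumptions and not necessarily the paper's exact propagation method, is Monte Carlo sampling: draw tensor samples from the per-voxel predictive distribution and map each through the standard MD/FA formulas; the means and variances here are toy values standing in for a network's outputs.

```python
import numpy as np

# Hypothetical per-voxel predictive distribution over the 6 DT components
# (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz); mu and var would come from the network.
rng = np.random.default_rng(0)
mu = np.array([1.0e-3, 0.9e-3, 0.8e-3, 1e-5, 1e-5, 1e-5])  # toy tensor
var = np.full(6, 1e-9)                                      # toy variances

# Sample tensors, map each to MD and FA, then summarise the induced
# distributions over the derived scalars.
S = 10_000
samples = rng.normal(mu, np.sqrt(var), size=(S, 6))

def md_fa(d):
    """Mean diffusivity and fractional anisotropy from DT components."""
    dxx, dyy, dzz, dxy, dxz, dyz = d.T
    D = np.stack([np.stack([dxx, dxy, dxz], -1),
                  np.stack([dxy, dyy, dyz], -1),
                  np.stack([dxz, dyz, dzz], -1)], -2)
    lam = np.linalg.eigvalsh(D)            # eigenvalues, batched per sample
    md = lam.mean(-1)
    fa = np.sqrt(1.5 * ((lam - md[..., None])**2).sum(-1) / (lam**2).sum(-1))
    return md, fa

md, fa = md_fa(samples)
print(f"MD: {md.mean():.3e} +/- {md.std():.1e}")
print(f"FA: {fa.mean():.3f} +/- {fa.std():.3f}")
```

The standard deviations of the sampled MD and FA values give per-voxel uncertainty maps over the derived scalars, which is what enables the voxel-wise risk assessment described in the abstract.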