Deep learning uncertainty and confidence calibration for the five-class polyp classification from colonoscopy.

Affiliations

Australian Institute for Machine Learning, School of Computer Science, University of Adelaide, Adelaide, SA 5005, Australia.

Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, SA 5005, Australia.

Publication information

Med Image Anal. 2020 May;62:101653. doi: 10.1016/j.media.2020.101653. Epub 2020 Feb 28.

Abstract

There are two challenges associated with the interpretability of deep learning models in medical image analysis that need to be addressed: confidence calibration and classification uncertainty. Confidence calibration associates the classification probability with the likelihood that it is actually correct - hence, a sample classified with confidence X% has an X% chance of being correctly classified. Classification uncertainty estimates the noise present in the classification process, and such a noise estimate can be used to assess the reliability of a particular classification result. Both confidence calibration and classification uncertainty are considered helpful for interpreting a classification result produced by a deep learning model, but it is unclear how much they affect classification accuracy and calibration, and how they interact. In this paper, we study the roles of confidence calibration (via post-process temperature scaling) and classification uncertainty (computed either from the classification entropy or from the predicted variance produced by Bayesian methods) in deep learning models. Results suggest that calibration and uncertainty improve both classification interpretation and accuracy. This motivates us to propose a new Bayesian deep learning method that relies on both calibration and uncertainty to improve classification accuracy and model interpretability. Experiments are conducted on a recently proposed five-class polyp classification problem, using a data set of 940 high-quality images of colorectal polyps, and results indicate that our proposed method achieves state-of-the-art results in terms of confidence calibration and classification accuracy.
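The three quantities the abstract refers to - post-process temperature scaling, entropy-based classification uncertainty, and the predicted variance from repeated Bayesian (e.g. Monte Carlo dropout) forward passes - can be illustrated with a minimal NumPy sketch. This is a generic illustration of the standard definitions, not the authors' implementation; the function names and the five-class logits are hypothetical.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens over-confident probabilities."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    """Entropy of the class distribution: one common uncertainty estimate."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def mc_uncertainty(mc_probs):
    """Given S stochastic passes with shape (S, N, C), return the mean
    prediction and the per-sample predicted variance (summed over classes)."""
    mean = mc_probs.mean(axis=0)
    var = mc_probs.var(axis=0).sum(axis=-1)
    return mean, var

# Hypothetical logits for one five-class polyp prediction.
logits = np.array([[2.0, 0.0, 0.0, 0.0, 0.0]])
p_raw = softmax(logits, T=1.0)
p_cal = softmax(logits, T=2.0)   # T fitted on a validation set in practice
```

Raising T flattens the distribution, so the calibrated probabilities have higher entropy than the raw ones; in temperature scaling, T is fitted on held-out data so that stated confidences match observed accuracy, without changing the argmax class.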

