Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
Institute of Radiology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, 10117, Berlin, Germany.
Eur Radiol. 2019 Jul;29(7):3348-3357. doi: 10.1007/s00330-019-06214-8. Epub 2019 May 15.
To develop a proof-of-concept "interpretable" deep learning prototype that justifies aspects of its predictions from a pre-trained hepatic lesion classifier.
A convolutional neural network (CNN) was engineered and trained to classify six hepatic tumor entities using 494 lesions on multi-phasic MRI, described in Part 1. A subset of each lesion class was labeled with up to four key imaging features per lesion. A post hoc algorithm inferred the presence of these features in a test set of 60 lesions by analyzing activation patterns of the pre-trained CNN model. Feature maps were generated that highlight regions in the original image that correspond to particular features. Additionally, relevance scores were assigned to each identified feature, denoting the relative contribution of a feature to the predicted lesion classification.
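The abstract does not include implementation details, but the following minimal Python/PyTorch sketch illustrates one way such a post hoc step could work: pool inner-layer activations of a pre-trained CNN, map them to labeled imaging features with simple linear probes, project the same probe weights back onto the activation maps to obtain feature maps, and normalize the feature scores into relevance values. All names here (TinyLesionCNN, FEATURE_NAMES, the probe weights) are illustrative assumptions, not the authors' actual architecture or algorithm.

```python
# Hedged sketch of a post hoc feature-inference step on a pre-trained CNN.
# Architecture, feature labels, and probe weights are placeholders only.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEATURE_NAMES = ["arterial enhancement", "washout", "capsule", "T2 hyperintensity"]  # hypothetical

class TinyLesionCNN(nn.Module):
    """Stand-in for the pre-trained multi-phasic MRI lesion classifier (6 classes)."""
    def __init__(self, in_channels=3, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        acts = self.features(x)                  # inner-layer activations
        logits = self.classifier(acts.flatten(1))
        return logits, acts

def infer_features(model, image, probes):
    """Infer imaging features, CAM-style feature maps, and relevance scores.

    `probes` is an (n_features, n_channels) matrix mapping activation channels
    to labeled imaging features; in practice it would be fit on the
    feature-labeled training subset, here it is random so the sketch runs."""
    model.eval()
    with torch.no_grad():
        logits, acts = model(image)              # acts: (1, C, H, W)
        pooled = acts.mean(dim=(2, 3))           # (1, C) channel descriptors
        feature_scores = pooled @ probes.T       # (1, n_features) presence scores
        # Feature maps: weight activation channels by each probe, upsample to input size
        maps = torch.einsum("fc,bchw->bfhw", probes, acts)
        maps = F.interpolate(maps, size=image.shape[-2:], mode="bilinear",
                             align_corners=False)
        # Relevance: relative contribution of each inferred feature
        relevance = torch.softmax(feature_scores, dim=1)
    return feature_scores, maps, relevance

if __name__ == "__main__":
    model = TinyLesionCNN()
    probes = torch.randn(len(FEATURE_NAMES), 32)   # placeholder probe weights
    image = torch.randn(1, 3, 64, 64)              # fake multi-phasic MRI patch
    scores, maps, relevance = infer_features(model, image, probes)
    for name, r in zip(FEATURE_NAMES, relevance.squeeze(0).tolist()):
        print(f"{name}: relevance {r:.2f}")
```

In a real pipeline the probe weights would be learned from the feature-labeled subset described above, and the upsampled maps would be overlaid on the original MRI phases to visualize where each feature is expressed.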
The interpretable deep learning system achieved 76.5% positive predictive value and 82.9% sensitivity in identifying the correct radiological features present in each test lesion. The model misclassified 12% of lesions. Correct features were identified less often in misclassified lesions than in correctly classified lesions (60.4% vs. 85.6%). Feature maps highlighted the original image voxels contributing to each imaging feature. Feature relevance scores tended to reflect the most prominent imaging criteria for each class.
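For clarity, the sketch below shows how the reported feature-identification metrics (positive predictive value and sensitivity) can be aggregated over per-lesion sets of predicted versus annotated features; the lesion sets shown are illustrative, not the study data.

```python
# Illustrative computation of PPV and sensitivity for feature identification,
# aggregated over lesions. Example feature sets are made up, not study data.
def feature_ppv_sensitivity(predicted, annotated):
    """PPV = TP / (TP + FP); sensitivity = TP / (TP + FN)."""
    tp = fp = fn = 0
    for pred, truth in zip(predicted, annotated):
        tp += len(pred & truth)   # features correctly identified
        fp += len(pred - truth)   # features reported but not annotated
        fn += len(truth - pred)   # annotated features that were missed
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return ppv, sensitivity

if __name__ == "__main__":
    predicted = [{"arterial enhancement", "washout"}, {"capsule"}]
    annotated = [{"arterial enhancement", "washout", "capsule"}, {"capsule"}]
    print(feature_ppv_sensitivity(predicted, annotated))  # (1.0, 0.75)
```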
This interpretable deep learning system demonstrates proof of principle for illuminating portions of a pre-trained deep neural network's decision-making, by analyzing inner layers and automatically describing features contributing to predictions.
• An interpretable deep learning system prototype can explain aspects of its decision-making by identifying relevant imaging features and showing where these features are found on an image, facilitating clinical translation.
• By providing feedback on the importance of various radiological features in performing differential diagnosis, interpretable deep learning systems have the potential to interface with standardized reporting systems such as LI-RADS, validating ancillary features and improving clinical practicality.
• An interpretable deep learning system could potentially add quantitative data to radiologic reports and serve radiologists with evidence-based decision support.