Byra Michał, Dobruch-Sobczak Katarzyna, Piotrzkowska-Wroblewska Hanna, Klimonda Ziemowit, Litniewski Jerzy
Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
Radiology Department II, Maria Sklodowska-Curie National Research Institute of Oncology, Warsaw, Poland.
J Ultrason. 2022 Apr 27;22(89):70-75. doi: 10.15557/JoU.2022.0013. eCollection 2022 Apr.
Deep neural networks have achieved good performance in breast mass classification in ultrasound (US) imaging. However, their use in clinical practice is still limited by the lack of explainability of the decisions made by the networks. In this study, to address the explainability problem, we generated saliency maps indicating the US image regions important for the network's classification decisions.
Ultrasound images were collected from 272 breast masses, including 123 malignant and 149 benign. Transfer learning was applied to develop a deep network for breast mass classification. Next, the class activation mapping technique was used to generate a saliency map for each image. Breast mass images were divided into three regions: the breast mass region, the peritumoral region surrounding the mass, and the region below the mass. The pointing game metric was used to quantitatively assess the overlap between the saliency maps and the three selected US image regions.
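A minimal sketch of the two techniques named above, class activation mapping (CAM) and the pointing game, is given below. It assumes a ResNet-18 backbone with a two-class head; the backbone choice, layer names, toy image, and region mask are illustrative assumptions, not details taken from the paper.

```python
# Sketch of CAM saliency maps and the pointing game metric (illustrative only).
import numpy as np
import torch
import torch.nn.functional as F
import torchvision

# Backbone for transfer learning; in the study a pretrained network was
# fine-tuned, here weights=None keeps the sketch self-contained.
model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # benign vs. malignant head
model.eval()

features = {}
def save_maps(module, inputs, output):
    features["maps"] = output.detach()                 # (1, C, h, w) last conv maps
model.layer4.register_forward_hook(save_maps)

def class_activation_map(image):
    """CAM: class-specific weighted sum of the last convolutional feature maps."""
    with torch.no_grad():
        logits = model(image)                          # image: (1, 3, H, W)
        cls = logits.argmax(dim=1).item()              # predicted class
        w = model.fc.weight[cls]                       # (C,) final dense-layer weights
        maps = features["maps"][0]                     # (C, h, w)
        cam = F.relu(torch.einsum("c,chw->hw", w, maps))
        cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
    return cam.numpy()

def pointing_game_hit(saliency, region_mask):
    """Hit if the saliency-map maximum falls inside the binary region mask."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return bool(region_mask[y, x])

# Toy usage with a random image and a dummy 'breast mass' region mask.
image = torch.rand(1, 3, 224, 224)
saliency = class_activation_map(image)
mass_mask = np.zeros_like(saliency, dtype=bool)
mass_mask[80:150, 80:150] = True
print("pointing game hit:", pointing_game_hit(saliency, mass_mask))
```

In this setup, the fraction of correctly classified images whose saliency maximum falls inside a given region (mass, peritumoral, or below-mass) corresponds to the pointing game score reported for that region.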
The deep learning classifier achieved an area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity of 0.887, 0.835, 0.801, and 0.868, respectively. For the correctly classified test US images, analysis of the saliency maps revealed that the network's decisions could be associated with the three selected regions in 71% of cases.
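For reference, the reported performance measures can be computed from test-set labels and predicted malignancy probabilities roughly as follows; the example arrays and the 0.5 decision threshold are illustrative assumptions, not the study's data.

```python
# Sketch of the evaluation metrics, assuming y_true holds ground-truth labels
# (1 = malignant, 0 = benign) and y_prob the predicted malignancy probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
y_pred = (y_prob >= 0.5).astype(int)                       # illustrative threshold

auc = roc_auc_score(y_true, y_prob)                        # area under the ROC curve
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                               # true positive rate
specificity = tn / (tn + fp)                               # true negative rate
print(f"AUC={auc:.3f} acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```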
Our study is an important step toward a better understanding of deep learning models developed for breast mass diagnosis. We demonstrated that the decisions made by the network can be related to the appearance of particular tissue regions in breast mass US images.