Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada.
Department of Radiology, Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonbuk National University Medical School, Jeonju City, Jeollabuk-Do, South Korea.
Ultrasound Med Biol. 2020 May;46(5):1119-1132. doi: 10.1016/j.ultrasmedbio.2020.01.001. Epub 2020 Feb 12.
To assist radiologists in breast cancer classification in automated breast ultrasound (ABUS) imaging, we propose a computer-aided diagnosis method based on a convolutional neural network (CNN) that classifies breast lesions as benign or malignant. The proposed CNN adopts a modified Inception-v3 architecture to provide efficient feature extraction in ABUS imaging. Because ABUS images can be visualized in transverse and coronal views, the proposed CNN provides an efficient way to extract multiview features from both views. The proposed CNN was trained and evaluated on 316 breast lesions (135 malignant and 181 benign). An observer performance test was conducted to compare five human reviewers' diagnostic performance before and after referring to the predictions of the proposed CNN. Our method achieved an area under the curve (AUC) of 0.9468 with five-fold cross-validation, with a sensitivity of 0.886 and a specificity of 0.876. Compared with conventional machine learning-based feature extraction schemes, specifically principal component analysis (PCA) and the histogram of oriented gradients (HOG), our method achieved a significant improvement in classification performance, with an AUC more than 10% higher than those of PCA and HOG. In the observer performance test, all five human reviewers achieved higher AUC values and sensitivities after referring to the classification results of the proposed CNN, and the AUC improvement was statistically significant for four of the five reviewers. The proposed CNN employing a multiview strategy shows promise for the diagnosis of breast cancer and could serve as a second reviewer to increase diagnostic reliability.
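As a brief illustration of the evaluation metrics reported above (not the authors' code), the AUC, sensitivity, and specificity can be computed from per-lesion malignancy scores as sketched below; the function names, threshold, and data are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): computing the
# metrics reported in the abstract -- sensitivity, specificity, and AUC --
# from hypothetical per-lesion malignancy scores. Labels: 1 = malignant,
# 0 = benign.

def sensitivity_specificity(scores, labels, threshold=0.5):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """AUC as the Mann-Whitney U statistic: the probability that a
    randomly chosen malignant lesion scores higher than a benign one
    (ties counted as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier outputs for eight lesions.
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.1, 0.6, 0.4]
labels = [1,   1,   1,    1,   0,   0,   0,   0]
sens, spec = sensitivity_specificity(scores, labels)
print(sens, spec, auc(scores, labels))  # prints: 0.75 0.75 0.875
```

In the study itself these metrics were averaged over five cross-validation folds; the sketch only shows the per-fold computation.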