Tian Huanhuan, Cai Li, Gui Yu, Cai Zhigang, Han Xianfeng, Liao Jianwei, Chen Li, Wang Yi
College of Computer and Information Science, Southwest University, No. 2, Tiansheng Road, Beibei District, Chongqing, 400715, China.
Department of Breast and Thyroid Surgery, Southwest Hospital of Third Military Medical University, No. 30, Gaotan Yanzheng Street, Shapingba District, Chongqing, 400380, China.
BMC Cancer. 2025 Mar 24;25(1):537. doi: 10.1186/s12885-025-13960-0.
Owing to the inherent attributes of breast BI-RADS 3, benign and malignant lesions differ only subtly and are highly imbalanced (malignant lesions make up a very small fraction). The objective of this study is to improve the detection rate of malignant BI-RADS 3 lesions on breast ultrasound (US) images using deep convolutional networks.
In this study, 1,275 lesions from 1,096 patients at Southwest Hospital (SW) and Tangshan Hospital (TS) were included; 629, 218, and 428 lesions were used for the development dataset, the internal test set, and the external test set, respectively. All malignant lesions were biopsy-confirmed, while benign lesions were verified by biopsy or by stability (no significant change) over a three-year follow-up. Each lesion had both B-mode and color Doppler images. We propose a two-step augmentation method, comprising malignancy-feature augmentation and data augmentation, and verify its feasibility on a dual-branch ResNet50 classification model named Dual-ResNet50. We compared our model with four radiologists experienced in breast imaging diagnosis.
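For readers unfamiliar with dual-branch designs: the following is a minimal sketch of what a Dual-ResNet50 fed with paired B-mode and color Doppler images could look like. The class name, the concatenation-based fusion, and the input sizes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumption, not the authors' code) of a dual-branch ResNet50
# that encodes a B-mode image and a color Doppler image per lesion and fuses
# the two feature vectors before a binary benign/malignant classifier.
import torch
import torch.nn as nn
from torchvision import models

class DualResNet50(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two independent ResNet50 backbones, one per imaging modality.
        self.bmode_branch = models.resnet50(weights=None)
        self.doppler_branch = models.resnet50(weights=None)
        feat_dim = self.bmode_branch.fc.in_features  # 2048
        # Drop the original classification heads; keep pooled features.
        self.bmode_branch.fc = nn.Identity()
        self.doppler_branch.fc = nn.Identity()
        # Fuse by concatenation (an assumption; other fusion schemes are possible).
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, bmode: torch.Tensor, doppler: torch.Tensor) -> torch.Tensor:
        f_b = self.bmode_branch(bmode)      # (N, 2048)
        f_d = self.doppler_branch(doppler)  # (N, 2048)
        return self.classifier(torch.cat([f_b, f_d], dim=1))

if __name__ == "__main__":
    model = DualResNet50()
    b = torch.randn(2, 3, 224, 224)   # B-mode batch
    d = torch.randn(2, 3, 224, 224)   # color Doppler batch
    print(model(b, d).shape)          # torch.Size([2, 2])
```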
After malignancy-feature and data augmentation, our model achieved an area under the receiver operating characteristic curve (AUC) of 0.881 (95% CI: 0.830-0.921) and a sensitivity of 77.8% (14/18) on the SW test set, and an AUC of 0.880 (95% CI: 0.847-0.910) and a sensitivity of 71.4% (5/7) on the TS test set. Our model outperformed the diagnoses of four radiologists with over 10 years of diagnostic experience.
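The reported metrics (AUC with a 95% CI, and sensitivity over a small number of malignant cases) can be computed for any test set with standard tooling; the sketch below shows one common approach, using a percentile bootstrap for the CI. The resample count, random seed, and 0.5 decision threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of AUC with a bootstrap 95% CI and sensitivity at a fixed
# threshold; inputs are label and score arrays for one test set.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)

def sensitivity(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    y_true = np.asarray(y_true)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn)
```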
Our proposed augmentation method helps the deep learning (DL) classification model improve the breast cancer detection rate in BI-RADS 3 lesions, demonstrating its potential to enhance diagnostic accuracy in early breast cancer detection. This improvement supports timely adjustment of subsequent treatment for these patients in clinical practice.