Duan Wenfeng, Wu Zhiheng, Zhu Huijun, Zhu Zhiyun, Liu Xiang, Shu Yongqiang, Zhu Xishun, Wu Jianhua, Peng Dechang
Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China.
School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China.
Am J Transl Res. 2024 Jun 15;16(6):2411-2422. doi: 10.62347/PUHR6185. eCollection 2024.
The estrogen receptor (ER) serves as a pivotal indicator for assessing endocrine therapy efficacy and breast cancer prognosis. Invasive biopsy is the conventional approach for appraising ER expression levels, but its accuracy is limited by tumor heterogeneity. To address this issue, a deep learning model based on mammography images was developed in this study for accurate evaluation of ER status in patients with breast cancer.
To predict ER status in breast cancer patients with a newly developed deep learning model based on mammography images.
Preoperative mammography images, ER expression levels, and clinical data spanning October 2016 to October 2021 were retrospectively collected from 358 patients diagnosed with invasive ductal carcinoma. The data were divided into a training dataset (n = 257) and a testing dataset (n = 101). A deep learning prediction model, termed the IP-SE-DResNet model, was then developed using two deep residual networks together with a Squeeze-and-Excitation attention mechanism. The model was designed to predict ER status in breast cancer patients from mammography images in both the craniocaudal and mediolateral oblique views. Performance measurements, including prediction accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC), were used to assess the effectiveness of the model.
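The abstract does not detail the internal architecture of IP-SE-DResNet, so the following is only a minimal illustrative sketch, in PyTorch, of the general idea it names: two residual backbones (one per mammographic view) with Squeeze-and-Excitation channel attention, fused for binary ER-status classification. The ResNet-18 backbone choice, the layer sizes, and the concatenation-based fusion are assumptions, not the authors' implementation.

# Illustrative sketch only (not the authors' IP-SE-DResNet): two ResNet-18
# backbones, one per view, each followed by a Squeeze-and-Excitation block,
# with concatenated features fed to a binary ER-status classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                   # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(                              # excitation: channel weights in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                          # rescale feature maps channel-wise

class DualViewSEResNet(nn.Module):
    """Two residual backbones (one per view) with SE attention, fused for ER status."""
    def __init__(self):
        super().__init__()
        # Drop each ResNet's own avgpool/fc so the backbones return spatial feature maps (B x 512 x h x w).
        self.cc_backbone = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.mlo_backbone = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.cc_se, self.mlo_se = SEBlock(512), SEBlock(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(512 * 2, 2)               # ER-positive vs ER-negative logits

    def forward(self, cc_img, mlo_img):
        f_cc = self.pool(self.cc_se(self.cc_backbone(cc_img))).flatten(1)
        f_mlo = self.pool(self.mlo_se(self.mlo_backbone(mlo_img))).flatten(1)
        return self.classifier(torch.cat([f_cc, f_mlo], dim=1))

# Usage: grayscale mammograms replicated to 3 channels for the ImageNet-style stem.
model = DualViewSEResNet()
cc, mlo = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
logits = model(cc, mlo)                                       # shape: (4, 2)

In the actual study, such a network would be trained on the ER labels of the training set with a standard classification loss; those training details are beyond what the abstract reports.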
In the training dataset, the AUCs for the IP-SE-DResNet model using mammography images from the craniocaudal view, the mediolateral oblique view, and the combination of both views were 0.849 (95% CI: 0.809-0.868), 0.858 (95% CI: 0.813-0.872), and 0.895 (95% CI: 0.866-0.913), respectively. The corresponding AUCs for these three image categories in the testing dataset were 0.835 (95% CI: 0.790-0.887), 0.746 (95% CI: 0.793-0.889), and 0.886 (95% CI: 0.809-0.934), respectively. A comparison of performance measurements showed that the proposed IP-SE-DResNet model substantially outperformed a traditional radiomics model using a naive Bayes classifier, whose AUCs were only 0.614 (95% CI: 0.594-0.638) in the training dataset and 0.613 (95% CI: 0.587-0.654) in the testing dataset, both obtained with the combined craniocaudal and mediolateral oblique views.
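The sketch below illustrates the style of evaluation reported above, under stated assumptions: a percentile bootstrap for the 95% confidence interval of the AUC and a Gaussian naive Bayes classifier as a stand-in for the radiomics baseline, neither of which is specified in the abstract. The data here are synthetic placeholders sized to the reported cohorts; in the real study, the feature matrices would hold radiomics features and the scores would be the deep model's predicted ER-positive probabilities.

# Evaluation sketch under stated assumptions: percentile-bootstrap 95% CI for the AUC,
# and a Gaussian naive Bayes classifier as a stand-in radiomics baseline. Radiomics
# feature extraction itself (e.g. from segmented lesions) is outside this snippet.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import GaussianNB

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point AUC plus a percentile-bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))       # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:                   # AUC needs both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Synthetic stand-ins matching the reported cohort sizes (257 training, 101 testing cases).
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(257, 20)), rng.integers(0, 2, 257)
X_test, y_test = rng.normal(size=(101, 20)), rng.integers(0, 2, 101)
nb = GaussianNB().fit(X_train, y_train)
auc, (lo, hi) = auc_with_bootstrap_ci(y_test, nb.predict_proba(X_test)[:, 1])
print(f"naive Bayes baseline: AUC = {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")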
The proposed IP-SE-DResNet model provides an effective and non-invasive approach for predicting ER status in breast cancer patients, potentially enhancing the efficiency and diagnostic precision of radiologists.