Wang Wenjiang, Wang Zimeng, Wang Lei, Li Jiaojiao, Pang Zhiying, Qu Yingwu, Cui Shujun
Graduate Faculty, Hebei North University, No. 12 Changqing Road, Qiaoxi District, Zhangjiakou 075000, Hebei, China.
Department of Medical Imaging, Affiliated First Hospital of Hebei North University, No. 12 Changqing Road, Qiaoxi District, Zhangjiakou 075000, Hebei, China.
Magn Reson Imaging. 2025 Sep;121:110401. doi: 10.1016/j.mri.2025.110401. Epub 2025 May 11.
To develop a multimodal model based on multiparametric breast MRI radiomics and deep learning for predicting preoperative Ki-67 expression status in breast cancer, with the potential to advance individualized treatment and precision medicine for breast cancer patients.
We included 176 patients with invasive breast cancer who underwent breast MRI and had available Ki-67 results. The dataset was randomly split into training (70%) and test (30%) sets. Radiomics and deep learning features were extracted from T1-weighted imaging (T1WI), diffusion-weighted imaging (DWI), T2-weighted imaging (T2WI), and dynamic contrast-enhanced MRI (DCE-MRI), and separate models were built for each sequence (T1, DWI, T2, and DCE). A multiparametric MRI (mp-MRI) model was then developed by fusing the features from all sequences. Models were trained using five-fold cross-validation and evaluated on the test set with the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. DeLong's test compared the mp-MRI model with the single-sequence models, with P < 0.05 indicating statistical significance.
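A minimal sketch of this evaluation workflow is shown below, assuming per-sequence feature matrices (here filled with placeholder random data), a logistic-regression classifier, and a 0.5 decision threshold; the abstract does not specify the classifier or threshold, so these are illustrative choices only.

```python
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix, f1_score

rng = np.random.default_rng(0)
n = 176  # cohort size from the abstract
# Placeholder blocks standing in for the per-sequence radiomics / deep-learning
# features; in the study these would come from the segmented tumour regions.
features_t1, features_dwi, features_t2, features_dce = (
    rng.normal(size=(n, 20)) for _ in range(4))
y = rng.integers(0, 2, size=n)  # placeholder Ki-67 status (1 = high, 0 = low)

# mp-MRI model: fuse the features of all four sequences
X = np.hstack([features_t1, features_dwi, features_t2, features_dce])

# 70 % / 30 % stratified train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# Assumed classifier (logistic regression on standardized features)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Five-fold cross-validation on the training set
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
cv_auc = cross_val_score(model, X_train, y_train, cv=cv, scoring="roc_auc")
print(f"CV AUC: {cv_auc.mean():.2f} +/- {cv_auc.std():.2f}")

# Final fit and held-out test-set evaluation
model.fit(X_train, y_train)
prob = model.predict_proba(X_test)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("Test AUC:    ", roc_auc_score(y_test, prob))
print("Accuracy:    ", (tp + tn) / (tp + tn + fp + fn))
print("Sensitivity: ", tp / (tp + fn))
print("Specificity: ", tn / (tn + fp))
print("PPV:         ", tp / (tp + fp))
print("NPV:         ", tn / (tn + fn))
print("F1 score:    ", f1_score(y_test, pred))
```

The same pipeline would be run four more times with X restricted to a single sequence's feature block to obtain the T1, DWI, T2, and DCE models.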
All five models demonstrated good performance, with AUCs of 0.83 for the T1 model, 0.85 for the DWI model, 0.90 for the T2 model, 0.92 for the DCE model, and 0.96 for the mp-MRI model. DeLong's test indicated statistically significant differences between the mp-MRI model and each of the other four models (all P < 0.05).
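For reference, the comparison of two correlated AUCs on the same test set can be carried out with DeLong's method as sketched below; this is a generic implementation of the standard test (not code from the study), and the random inputs at the end are placeholders for the mp-MRI and single-sequence test-set probabilities.

```python
import numpy as np
from scipy.stats import norm

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for the difference between two correlated AUCs.

    y_true   : binary labels (1 = high Ki-67, 0 = low, in this setting)
    scores_a : predicted probabilities from model A (e.g. the mp-MRI model)
    scores_b : predicted probabilities from model B (e.g. a single-sequence model)
    Returns (auc_a, auc_b, p_value).
    """
    y_true = np.asarray(y_true)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    pos_a, neg_a = scores_a[y_true == 1], scores_a[y_true == 0]
    pos_b, neg_b = scores_b[y_true == 1], scores_b[y_true == 0]

    def structural_components(pos, neg):
        # psi(x, y) = 1 if x > y, 0.5 if tied, 0 otherwise
        psi = (pos[:, None] > neg[None, :]).astype(float) \
            + 0.5 * (pos[:, None] == neg[None, :])
        v10 = psi.mean(axis=1)   # one value per positive case
        v01 = psi.mean(axis=0)   # one value per negative case
        return v10, v01, psi.mean()

    v10_a, v01_a, auc_a = structural_components(pos_a, neg_a)
    v10_b, v01_b, auc_b = structural_components(pos_b, neg_b)

    m, n = len(v10_a), len(v01_a)
    s10 = np.cov(np.vstack([v10_a, v10_b]))   # 2x2 covariance over positives
    s01 = np.cov(np.vstack([v01_a, v01_b]))   # 2x2 covariance over negatives
    var = (s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m \
        + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n
    z = (auc_a - auc_b) / np.sqrt(var)
    p_value = 2 * norm.sf(abs(z))
    return auc_a, auc_b, p_value

# Illustrative usage with random scores (real inputs would be the held-out
# test-set probabilities of the two models being compared):
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=53)  # roughly 30 % of the 176 patients
print(delong_test(labels, rng.random(53), rng.random(53)))
```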
The multiparametric breast MRI radiomics and deep learning-based multimodal model performs well in predicting preoperative Ki-67 expression status in breast cancer.