Wang Wenjiang, Li Jiaojiao, Wang Zimeng, Liu Yanjun, Yang Fei, Cui Shujun
Graduate Faculty, Hebei North University, Zhangjiakou, Hebei, China.
Department of Medical Imaging, The First Affiliated Hospital of Hebei North University, Zhangjiakou, Hebei, China.
Eur J Radiol Open. 2024 Oct 21;13:100607. doi: 10.1016/j.ejro.2024.100607. eCollection 2024 Dec.
To develop a multi-modal model combining multi-sequence breast MRI fusion radiomics and deep learning for classifying benign and malignant breast lesions, assisting clinicians in selecting treatment plans.
A total of 314 patients who underwent breast MRI examinations were included and randomly divided into training, validation, and test sets in a ratio of 7:1:2. Features of T1-weighted images (T1WI), T2-weighted images (T2WI), and dynamic contrast-enhanced MRI (DCE-MRI) were extracted using the convolutional neural network ResNet50 and fused, then combined with radiomic features from the three sequences. The following models were established: T1 model, T2 model, DCE model, DCE_T1_T2 model, and DCE_T1_T2_rad model. Model performance was evaluated by the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. The differences between the DCE_T1_T2_rad model and the other four models were compared using the DeLong test, with a P-value < 0.05 considered statistically significant.
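The fusion step described above can be sketched as a simple feature concatenation; this is a minimal illustration under assumptions, not the authors' code. It assumes each sequence yields a 2048-dimensional ResNet50 feature vector (the output of ResNet50's global average pooling) and a radiomic feature vector (107 features per sequence, a typical PyRadiomics default), and that fusion is plain concatenation, which the abstract does not specify.

```python
import numpy as np

def fuse_features(deep_t1, deep_t2, deep_dce, rad_t1, rad_t2, rad_dce):
    """Concatenate per-sequence deep and radiomic features into one vector.

    deep_*: 2048-dim ResNet50 features, one per MRI sequence (T1WI, T2WI, DCE).
    rad_*:  radiomic feature vectors (here assumed 107 features each).
    Concatenation is one common fusion strategy; the paper does not state
    the exact fusion operator, so this choice is an assumption.
    """
    return np.concatenate([deep_t1, deep_t2, deep_dce, rad_t1, rad_t2, rad_dce])

# Toy example with random features standing in for one lesion
rng = np.random.default_rng(0)
deep = [rng.standard_normal(2048) for _ in range(3)]  # three sequences
rad = [rng.standard_normal(107) for _ in range(3)]
fused = fuse_features(*deep, *rad)
print(fused.shape)  # (3 * 2048 + 3 * 107,) = (6465,)
```

The fused vector would then feed a downstream classifier; per-sequence models (T1, T2, DCE) correspond to using only that sequence's features.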
The five models established in this study performed well, with AUC values of 0.53 for the T1 model, 0.62 for the T2 model, 0.79 for the DCE model, 0.94 for the DCE_T1_T2 model, and 0.98 for the DCE_T1_T2_rad model. The DCE_T1_T2_rad model showed statistically significant differences (P < 0.05) compared to the other four models.
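The AUC values above can be computed from per-lesion malignancy scores via the Mann-Whitney U statistic (AUC equals the probability that a randomly chosen malignant lesion scores higher than a benign one, with ties counted as half). A minimal numpy sketch with made-up scores, not the authors' evaluation code:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic.

    scores: model malignancy scores; labels: 1 = malignant, 0 = benign.
    Counts pairwise wins of malignant over benign scores, ties as 1/2.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()  # malignant outranks benign
    ties = (pos[:, None] == neg[None, :]).sum()    # half credit for ties
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [0, 0, 0, 1, 1, 1]
weak   = [0.4, 0.6, 0.5, 0.5, 0.6, 0.7]   # overlapping scores -> modest AUC
strong = [0.1, 0.2, 0.3, 0.8, 0.9, 0.95]  # well separated -> AUC = 1.0
print(auc(weak, labels), auc(strong, labels))
```

Comparing two models' AUCs on the same test set, as done here, requires a paired test such as DeLong's, which accounts for the correlation between models evaluated on the same lesions.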
The use of a multi-modal model combining multi-sequence breast MRI fusion radiomics and deep learning can effectively improve the diagnostic performance of breast lesion classification.