Department of Health Management, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China.
Department of Health Management, Shandong University of Traditional Chinese Medicine, Jinan City, Shandong Province, China.
J Cancer Res Ther. 2024 Apr 1;20(2):625-632. doi: 10.4103/jcrt.jcrt_1796_23. Epub 2024 Apr 30.
To establish a multimodal model for distinguishing benign and malignant breast lesions.
Clinical data, mammography images, and MRI images (including T2WI, diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and DCE-MRI images) from 132 patients with benign breast lesions or breast cancer were analyzed retrospectively. The region of interest (ROI) in each image was marked and segmented using MATLAB. Mammography, T2WI, DWI, ADC, and DCE-MRI models based on the ResNet34 network were trained. Using an ensemble learning approach, these five models served as base models, and a multimodal model was constructed by voting. The dataset was divided into a training set and a test set. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of each model were calculated. The diagnostic efficacy of each model was analyzed using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). Diagnostic value was compared using the DeLong test, with P < 0.05 considered statistically significant.
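The voting step that combines the five single-modality classifiers can be illustrated with a minimal majority-vote sketch. The function name and the example predictions below are hypothetical, not taken from the paper; only the idea (five base models, one binary vote per lesion) follows the method described above.

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary predictions (benign = 0, malignant = 1) from several
    base models by majority voting.

    predictions: array of shape (n_models, n_samples).
    Returns an array of shape (n_samples,) with the ensemble decision.
    """
    votes = np.sum(predictions, axis=0)
    # A lesion is called malignant when more than half of the models agree.
    return (votes > predictions.shape[0] / 2).astype(int)

# Hypothetical per-lesion predictions from the five base models
# (mammography, T2WI, DWI, ADC, DCE-MRI) for four lesions.
preds = np.array([
    [1, 0, 1, 0],   # mammography model
    [1, 0, 0, 0],   # T2WI model
    [1, 1, 1, 0],   # DWI model
    [0, 1, 1, 0],   # ADC model
    [1, 0, 1, 1],   # DCE-MRI model
])
print(majority_vote(preds))  # -> [1 0 1 0]
```

With an odd number of base models (five here), a simple majority is always well defined and no tie-breaking rule is needed.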
We evaluated the ability of each model to classify benign and malignant tumors using the test set. The AUC values of the multimodal, mammography, T2WI, DWI, ADC, and DCE-MRI models were 0.943, 0.645, 0.595, 0.905, 0.900, and 0.865, respectively. The diagnostic ability of the multimodal model was significantly higher than that of the mammography and T2WI models, but did not differ significantly from that of the DWI, ADC, and DCE-MRI models.
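AUC values like those reported above can be computed from per-lesion scores with the rank-based (Mann-Whitney) formulation, which is also the statistic underlying the DeLong comparison. A minimal sketch with hypothetical labels and scores (the function name and data are illustrative, not from the study):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen malignant case (label 1)
    receives a higher score than a randomly chosen benign case (label 0),
    counting ties as one half.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every (malignant, benign) pair of scores.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical example: 2 benign and 2 malignant lesions.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc_score(labels, scores))  # -> 0.75
```

Three of the four malignant-versus-benign score pairs are correctly ordered, giving an AUC of 3/4 = 0.75.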
Our deep learning model based on multimodal image training has practical value for the diagnosis of benign and malignant breast lesions.