Helal Maha, Khaled Rana, Alfarghaly Omar, Mokhtar Omnia, Elkorany Abeer, Fahmy Aly, El Kassas Hebatalla
Radiology Department, National Cancer Institute, Cairo University, Cairo 11796, Egypt.
Computer Science Department, Computers and Artificial Intelligence, Cairo University, Cairo 12613, Egypt.
Eur J Radiol. 2024 Apr;173:111392. doi: 10.1016/j.ejrad.2024.111392. Epub 2024 Feb 23.
Contrast-enhanced mammography (CEM) is used for characterization of breast lesions, with increased diagnostic accuracy compared to digital mammography (DM). Artificial intelligence (AI) approaches are emerging with accuracies equal to those of an average radiologist. However, most studies trained deep learning (DL) models on DM images, and there is a paucity of literature exploring the application of AI to CEM.
To develop and test a DL model that classifies CEM images and produces corresponding highlights of lesions detected.
A total of 2006 fully annotated images of 326 females, available from the previously published Categorized Digital Database for Contrast Enhanced Mammography images (CDD-CESM), were used for training. We developed a DL multiview contrast mammography model (MVCM) for classification of CEM low-energy and recombined images. An external test set of 288 images of 37 females not included in the training was used for validation. Correlation with histopathological results and follow-up served as the reference standard. The study protocol was approved by the Institutional Review Board, and patient informed consent was obtained.
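The abstract does not specify the MVCM architecture. As a purely illustrative sketch (the toy encoders, layer sizes, and late-fusion design below are assumptions, not the authors' model), a two-view classifier might fuse features from the low-energy and recombined views before a single classification head:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_view(img, weights):
    """Toy per-view encoder: flatten the image and project to a feature vector."""
    return np.tanh(img.ravel() @ weights)

def mvcm_score(low_energy, recombined, w_le, w_rec, w_head):
    """Late fusion: concatenate per-view features, then a linear head + sigmoid."""
    feats = np.concatenate([encode_view(low_energy, w_le),
                            encode_view(recombined, w_rec)])
    logit = feats @ w_head
    return 1.0 / (1.0 + np.exp(-logit))  # malignancy score in (0, 1)

# Hypothetical 8x8 "images" and randomly initialised weights, for shape-checking only.
le = rng.standard_normal((8, 8))
rec = rng.standard_normal((8, 8))
w_le = rng.standard_normal((64, 16))
w_rec = rng.standard_normal((64, 16))
w_head = rng.standard_normal(32)

score = mvcm_score(le, rec, w_le, w_rec, w_head)
```

In a real model the toy encoders would be replaced by trained convolutional backbones; the point of the sketch is only that each view is encoded separately and the score is produced from the fused representation.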
Assessment was done on an external test set of 37 females (mean age, 51.31 years ± 11.07 [SD]), with an AUC-ROC for AI performance of 0.936 (95 % CI: 0.898, 0.973; p < 0.001); the best cutoff value for prediction of malignancy using the AI score was 0.28. Findings were then correlated with histopathological results and follow-up, which revealed a sensitivity of 75 %, specificity of 96.3 %, total accuracy of 90.1 %, positive predictive value (PPV) of 87.1 %, and negative predictive value (NPV) of 92 % (p < 0.001). Diagnostic indices of the radiologists were sensitivity 88.9 %, specificity 92.6 %, total accuracy 91.7 %, PPV 80 %, and NPV 96.2 % (p < 0.001).
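The diagnostic indices reported above follow the standard confusion-matrix definitions. A minimal sketch (the counts below are hypothetical examples chosen to illustrate the formulas, not the study's actual confusion matrix):

```python
def diagnostic_indices(tp, fp, tn, fn):
    """Standard diagnostic-accuracy measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
    }

# Illustrative counts only (not from the paper).
m = diagnostic_indices(tp=9, fp=1, tn=26, fn=3)
print(round(m["sensitivity"], 3))  # 0.75  (9 / 12)
print(round(m["specificity"], 3))  # 0.963 (26 / 27)

# Applying the abstract's reported cutoff to a continuous AI score:
THRESHOLD = 0.28
def predict(score):
    return "malignant" if score >= THRESHOLD else "benign"
```

Note that sensitivity/specificity depend only on the threshold applied to the score, whereas PPV and NPV additionally depend on disease prevalence in the test set.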
A deep learning multiview CEM model was developed and evaluated in a cohort of female participants and showed promising results in detecting breast cancer. This warrants further studies, external training, and validation.