Akkaya Hüseyin, Demirel Emin, Dilek Okan, Dalgalar Akkaya Tuba, Öztürkçü Turgay, Karaaslan Erişen Kübra, Tas Zeynel Abidin, Bas Sevda, Gülek Bozkurt
Department of Radiology, Faculty of Medicine, Ondokuz Mayis University, 55280 Samsun, Turkey.
Department of Radiology, Afyonkarahisar City Training and Research Hospital, University of Health Sciences, 03030 Afyonkarahisar, Turkey.
Br J Radiol. 2025 Feb 1;98(1166):254-261. doi: 10.1093/bjr/tqae221.
To evaluate the interobserver agreement and diagnostic accuracy of the Ovarian-Adnexal Reporting and Data System MRI (O-RADS MRI) and its applicability to machine learning.
Dynamic contrast-enhanced pelvic MRI examinations of 471 lesions were retrospectively analysed and assessed by 3 radiologists according to O-RADS MRI criteria. Radiomic data were extracted from T2-weighted and post-contrast fat-suppressed T1-weighted images. Using these data, artificial neural network (ANN), support vector machine, random forest, and naive Bayes models were constructed.
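The four-model comparison described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the study's radiomic features and O-RADS labels are not public, so synthetic stand-in data are generated, and the feature count and model hyperparameters are assumptions.

```python
# Hedged sketch of comparing the four classifier families named in the
# abstract. All data here are synthetic stand-ins (471 "lesions",
# 100 "radiomic" features, 5 O-RADS classes); hyperparameters are
# illustrative assumptions, not the study's settings.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic feature matrix standing in for extracted radiomic features.
X, y = make_classification(n_samples=471, n_features=100, n_informative=20,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "NB":  GaussianNB(),
}
for name, model in models.items():
    # Scaling matters for the ANN and SVM; it is harmless for RF and NB.
    clf = make_pipeline(StandardScaler(), model)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name,
          round(precision_score(y_te, pred, average="macro", zero_division=0), 3),
          round(recall_score(y_te, pred, average="macro"), 3))
```

Macro-averaged precision and recall are used here because the abstract reports both metrics for the multi-class O-RADS grouping.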
Among all readers, the lowest agreement was found for the O-RADS 4 group (kappa: 0.669; 95% confidence interval [CI], 0.634-0.733), followed by the O-RADS 5 group (kappa: 0.709; 95% CI, 0.678-0.754). O-RADS 4 predicted malignancy with an area under the curve (AUC) of 0.743 (95% CI, 0.701-0.782), and O-RADS 5 with an AUC of 0.955 (95% CI, 0.932-0.972) (P < .001). Among the machine learning models, the ANN achieved the highest performance, distinguishing the O-RADS groups with an AUC of 0.948, a precision of 0.861, and a recall of 0.824.
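The two headline metrics above can be reproduced in miniature. The sketch below uses fabricated toy scores, not the study's data, and shows pairwise Cohen's kappa for two readers (with three readers, a multi-rater statistic such as Fleiss' kappa is the common choice) plus ROC AUC for a binary malignancy outcome scored by O-RADS category.

```python
# Hedged sketch of the reported metrics: interobserver kappa and
# malignancy-prediction AUC. All values are toy data for illustration.
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Toy O-RADS categories assigned by two readers to six lesions.
reader1 = [2, 3, 4, 4, 5, 5]
reader2 = [2, 3, 3, 4, 5, 4]
kappa = cohen_kappa_score(reader1, reader2)  # chance-corrected agreement

# Toy malignancy outcome (1 = malignant) with the O-RADS category
# itself used as the ranking score, as in category-level AUC analyses.
truth  = [0, 0, 0, 1, 1, 1]
scores = [2, 3, 4, 4, 5, 5]
auc = roc_auc_score(truth, scores)

print(round(kappa, 3), round(auc, 3))
```

Ties between categories (a benign and a malignant lesion both rated O-RADS 4 here) contribute half credit to the AUC, which is why a single overlapping category already pulls it below 1.0.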
The interobserver agreement and diagnostic sensitivity of O-RADS MRI in assigning O-RADS 4-5 were not perfect, indicating a need for structural improvement of the system. Integrating artificial intelligence into MRI evaluation protocols may enhance their performance.
Machine learning can achieve high accuracy in the correct classification of O-RADS MRI. Malignancy prediction rates were 74% for O-RADS 4 and 95% for O-RADS 5.