Multimodal AI Combining Clinical and Imaging Inputs Improves Prostate Cancer Detection.

Author Affiliations

From the Department of Radiology, Medical Imaging Center, University Medical Center Groningen, Groningen, the Netherlands (C.R., D.Y., D.I.R.S., S.J.F., T.C.K.); Department of Radiology, Netherlands Cancer Center Antoni van Leeuwenhoek, Amsterdam, the Netherlands (D.Y.); Department of Radiology, Radboud University Medical Center, Nijmegen, the Netherlands (J.S.B., H.H.); and Department of Radiology, Martini Ziekenhuis Groningen, Groningen, the Netherlands (D.B.R.).

Publication Information

Invest Radiol. 2024 Dec 1;59(12):854-860. doi: 10.1097/RLI.0000000000001102. Epub 2024 Jul 29.

Abstract

OBJECTIVES

Deep learning (DL) studies for the detection of clinically significant prostate cancer (csPCa) on magnetic resonance imaging (MRI) often overlook potentially relevant clinical parameters such as prostate-specific antigen, prostate volume, and age. This study explored the integration of clinical parameters and MRI-based DL to enhance diagnostic accuracy for csPCa on MRI.

MATERIALS AND METHODS

We retrospectively analyzed 932 biparametric prostate MRI examinations performed for suspected csPCa (ISUP ≥2) at 2 institutions. Each MRI scan was automatically analyzed by a previously developed DL model to detect and segment csPCa lesions. Three sets of features were extracted: DL lesion suspicion levels, clinical parameters (prostate-specific antigen, prostate volume, age), and MRI-based lesion volumes for all DL-detected lesions. Six multimodal artificial intelligence (AI) classifiers were trained for each combination of feature sets, employing both early (feature-level) and late (decision-level) information fusion methods. The diagnostic performance of each model was tested internally on 20% of center 1 data and externally on center 2 data (n = 529). Receiver operating characteristic (ROC) curve comparisons determined the optimal feature combination and information fusion method and assessed the benefit of multimodal versus unimodal analysis. The performance of the optimal model was compared with that of a radiologist using PI-RADS.
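
As a rough illustration of the early (feature-level) versus late (decision-level) fusion strategies described above, the sketch below builds both variants for examination-level csPCa classification. The abstract does not specify the classifier family, preprocessing, or how per-lesion DL suspicion levels are aggregated per scan; the logistic regression models, standard scaling, use of a single suspicion value per examination, and all synthetic data and variable names are assumptions made purely for illustration.

```python
# Minimal sketch of early (feature-level) vs late (decision-level) fusion for
# examination-level csPCa classification. The classifier choice (logistic
# regression), preprocessing, and synthetic data are illustrative assumptions,
# not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-examination inputs
dl_suspicion = rng.random((n, 1))          # e.g., maximum DL lesion suspicion per scan
clinical = np.column_stack([
    rng.lognormal(1.5, 0.5, n),            # PSA (ng/mL)
    rng.normal(50, 15, n),                 # prostate volume (mL)
    rng.normal(66, 7, n),                  # age (years)
])
# Synthetic labels loosely tied to the features, for demonstration only
y = (dl_suspicion[:, 0] + 0.02 * clinical[:, 0] + rng.normal(0, 0.3, n) > 0.8).astype(int)

# Early fusion: concatenate all modalities into one feature vector, train a single model.
early_fusion = make_pipeline(StandardScaler(), LogisticRegression())
early_fusion.fit(np.hstack([dl_suspicion, clinical]), y)
early_scores = early_fusion.predict_proba(np.hstack([dl_suspicion, clinical]))[:, 1]

# Late fusion: train one model per modality, then combine their predicted risks
# (here a simple average) into a final decision-level score.
imaging_model = make_pipeline(StandardScaler(), LogisticRegression()).fit(dl_suspicion, y)
clinical_model = make_pipeline(StandardScaler(), LogisticRegression()).fit(clinical, y)
late_scores = 0.5 * (imaging_model.predict_proba(dl_suspicion)[:, 1]
                     + clinical_model.predict_proba(clinical)[:, 1])
```

In this toy setup the early-fusion model sees all features jointly, whereas the late-fusion score only combines modality-specific predictions after each model has been trained separately; the study's finding that early fusion generalized better externally concerns this structural difference, not the specific models used here.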

RESULTS

Internally, the multimodal AI integrating DL suspicion levels with clinical features via early fusion achieved the highest performance. Externally, it surpassed baselines using clinical parameters alone (area under the curve [AUC]: 0.77 vs 0.67, P < 0.001) and DL suspicion levels alone (AUC: 0.77 vs 0.70, P = 0.006). Early fusion outperformed late fusion on external data (AUC: 0.77 vs 0.73, P = 0.005). No significant performance differences were observed between multimodal AI and radiologist assessments (internal AUC: 0.87 vs 0.88; external AUC: 0.77 vs 0.75; both P > 0.05).
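
The abstract reports pairwise AUC comparisons with P values but does not state which statistical test was used; a paired bootstrap over examinations is one common way to compare two models' AUCs on the same test set. The sketch below, with hypothetical score arrays standing in for the multimodal and DL-only predictions, is an illustrative stand-in rather than the authors' analysis.

```python
# Illustrative paired-bootstrap comparison of two models' AUCs on the same
# test set. This is an assumed stand-in for the (unspecified) statistical
# comparison in the abstract, not the authors' method.
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_auc_diff(y_true, scores_a, scores_b, n_boot=2000, seed=0):
    """Observed AUC difference (A - B) and a bootstrap 95% CI for it."""
    rng = np.random.default_rng(seed)
    y_true, scores_a, scores_b = map(np.asarray, (y_true, scores_a, scores_b))
    observed = roc_auc_score(y_true, scores_a) - roc_auc_score(y_true, scores_b)
    diffs = []
    n = len(y_true)
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)            # resample examinations with replacement
        if len(np.unique(y_true[idx])) < 2:    # AUC requires both classes in the sample
            continue
        diffs.append(roc_auc_score(y_true[idx], scores_a[idx])
                     - roc_auc_score(y_true[idx], scores_b[idx]))
    low, high = np.percentile(diffs, [2.5, 97.5])
    return observed, (low, high)

# Hypothetical scores standing in for multimodal-AI vs DL-only predictions
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 300)
multimodal_scores = 0.4 * labels + 0.6 * rng.random(300)
dl_only_scores = 0.25 * labels + 0.75 * rng.random(300)
diff, ci = paired_bootstrap_auc_diff(labels, multimodal_scores, dl_only_scores)
print(f"AUC difference: {diff:.3f}, 95% bootstrap CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
```

A confidence interval for the AUC difference that excludes zero plays the same role as the significance thresholds quoted above; resampling whole examinations keeps the comparison paired, since both models are evaluated on identical bootstrap samples.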

CONCLUSIONS

Multimodal AI (combining DL suspicion levels and clinical parameters) outperforms both clinical-only and MRI-only AI for csPCa detection. Early information fusion improved AI robustness in our multicenter setting. Incorporating lesion volumes did not improve diagnostic efficacy.
