
Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.

Affiliations

From the Institute of Diagnostic and Interventional Radiology (L.D., Z.S., H.D., J.J., D.W., G.T., X.S., J.Z., Q.Z., Y.L.) and Clinical Research Center (J.W.), Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, 600 Yishan Road, Shanghai 200000, China; Shanghai AI Laboratory, Shanghai, China (J.L., Y.Z.); School of Computer Science and Technology, University of Science and Technology of China, Anhui, China (J.L.); The Pennsylvania State University College of Information Sciences and Technology, University Park, Pa (F.M.); Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China (H.Z.); Department of Radiology, Affiliated Hospital of Nantong University, Nantong, China (J.J.); Department of Radiology, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China (S.A.); Department of Radiology, Shanghai Public Health Clinical Center, Shanghai, China (A.S.); Department of Radiology, Wuhan Hankou Hospital, Wuhan, China (Z.L.); and Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China (Y.Z.).

Publication Information

Radiol Artif Intell. 2024 Nov;6(6):e230520. doi: 10.1148/ryai.230520.

Abstract

Purpose: To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features, achieving interpretable lesion detection.

Materials and Methods: In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features extracted from the radiology reports guided a DL model (ReportGuidedNet) to focus on lesion characteristics; a second DL model without textual features (PlainNet) was developed for comparison. Both models classified 15 conditions, comprising 14 diseases and normal brain. Performance of each model was assessed by calculating the macro-averaged area under the receiver operating characteristic curve (ma-AUC) and the micro-averaged AUC (mi-AUC). Attention maps, which visualize where each model attends, were rated on a five-point Likert scale.

Results: ReportGuidedNet outperformed PlainNet for all diagnoses on both the internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance gap between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13), indicating better generalization. The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; P < .001).

Conclusion: Integrating textual features from radiology reports improved the DL model's ability to detect brain lesions, thereby enhancing interpretability and generalizability.

Keywords: Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI

Published under a CC BY 4.0 license.
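The two headline metrics differ in how they aggregate the 15 one-vs-rest classification tasks: ma-AUC averages the per-condition AUCs (each condition weighted equally), while mi-AUC pools every one-vs-rest decision before computing a single AUC (each scan weighted equally). A minimal sketch of that distinction, using the rank-sum formulation of AUC — the labels, scores, and function names are illustrative and not from the study:

```python
def auc_binary(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_micro_auc(y_true, y_score, n_classes):
    """y_true: class index per scan; y_score: per-scan list of class scores.
    Returns (ma_auc, mi_auc) over one-vs-rest tasks."""
    per_class = []
    pooled_labels, pooled_scores = [], []
    for c in range(n_classes):
        labels = [1 if y == c else 0 for y in y_true]
        scores = [s[c] for s in y_score]
        per_class.append(auc_binary(labels, scores))   # one-vs-rest AUC
        pooled_labels.extend(labels)                   # pool decisions for micro
        pooled_scores.extend(scores)
    ma_auc = sum(per_class) / n_classes                # macro: mean of class AUCs
    mi_auc = auc_binary(pooled_labels, pooled_scores)  # micro: AUC over the pool
    return ma_auc, mi_auc
```

With imbalanced classes — common in a 14-disease screening setting — the two averages can diverge, which is why the abstract reports both.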


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f2a8/11605145/e0d1da700243/ryai.230520.fig1.jpg
