Multimodal fusion with deep neural networks for leveraging CT imaging and electronic health record: a case-study in pulmonary embolism detection.

Affiliations

Department of Biomedical Data Science, Stanford University, Stanford, USA.

Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, USA.

Publication information

Sci Rep. 2020 Dec 17;10(1):22147. doi: 10.1038/s41598-020-78888-w.

Abstract

Recent advancements in deep learning have led to a resurgence of medical imaging and Electronic Medical Record (EMR) models for a variety of applications, including clinical decision support, automated workflow triage, clinical prediction and more. However, very few models have been developed to integrate both clinical and imaging data, despite that in routine practice clinicians rely on EMR to provide context in medical imaging interpretation. In this study, we developed and compared different multimodal fusion model architectures that are capable of utilizing both pixel data from volumetric Computed Tomography Pulmonary Angiography scans and clinical patient data from the EMR to automatically classify Pulmonary Embolism (PE) cases. The best performing multimodality model is a late fusion model that achieves an AUROC of 0.947 [95% CI: 0.946-0.948] on the entire held-out test set, outperforming imaging-only and EMR-only single modality models.
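To make the late-fusion idea concrete, the following is a minimal PyTorch sketch of a classifier that encodes a volumetric CTPA scan and an EMR feature vector in separate branches and concatenates the resulting embeddings before a final classification layer. The layer sizes, `emr_dim`, and `embed_dim` are illustrative placeholders and do not reflect the specific architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class LateFusionPEClassifier(nn.Module):
    """Hypothetical late-fusion classifier combining a 3D CT encoder
    with an EMR feature encoder; not the authors' exact architecture."""

    def __init__(self, emr_dim: int = 64, embed_dim: int = 128):
        super().__init__()
        # Imaging branch: a small 3D CNN standing in for the paper's
        # volumetric CTPA encoder (the real encoder is much larger).
        self.ct_encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        # EMR branch: a simple MLP over tabular clinical features.
        self.emr_encoder = nn.Sequential(
            nn.Linear(emr_dim, embed_dim),
            nn.ReLU(),
        )
        # Late fusion: concatenate modality embeddings, then classify.
        self.classifier = nn.Linear(2 * embed_dim, 1)

    def forward(self, ct_volume: torch.Tensor, emr_features: torch.Tensor) -> torch.Tensor:
        ct_emb = self.ct_encoder(ct_volume)           # (B, embed_dim)
        emr_emb = self.emr_encoder(emr_features)      # (B, embed_dim)
        fused = torch.cat([ct_emb, emr_emb], dim=1)   # fuse only at the decision stage
        return torch.sigmoid(self.classifier(fused))  # probability of PE

# Example with dummy inputs: a batch of 2 single-channel CT volumes and EMR vectors.
model = LateFusionPEClassifier()
ct = torch.randn(2, 1, 32, 64, 64)
emr = torch.randn(2, 64)
print(model(ct, emr).shape)  # torch.Size([2, 1])
```

The defining feature of late fusion, as opposed to early or joint fusion, is that each modality is processed by its own encoder and the modalities interact only near the output, which is the general pattern the best-performing model in the study follows.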


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a715/7746687/9c4a4c83547f/41598_2020_78888_Fig1_HTML.jpg
