Wang Yaning, Zhang Jingfeng, Li Mingyang, Miao Zheng, Wang Jing, He Kan, Yang Qi, Zhang Lei, Mu Lin, Zhang Huimao
Department of Radiology, The First Hospital of Jilin University, No.1, Xinmin Street, Changchun 130021, China (Y.W., M.L., Z.M., J.W., K.H., Q.Y., L.Z., L.M., H.Z.).
Department of Radiology, Ningbo No. 2 Hospital, Ningbo, 315010, China (J.Z.).
Acad Radiol. 2025 May;32(5):2655-2666. doi: 10.1016/j.acra.2024.11.056. Epub 2024 Dec 16.
Effective trauma care in emergency departments requires rapid diagnosis by interdisciplinary teams drawing on diverse medical data. This study constructed a multimodal diagnostic model for abdominal trauma by applying deep learning to non-contrast computed tomography (CT) scans and unstructured text data, improving the speed and accuracy of solid-organ assessment.
Data were collected from patients undergoing abdominal CT scans. The SMART model (Screening for Multi-organ Assessment in Rapid Trauma) classifies trauma from text data (SMART_GPT), non-contrast CT scans (SMART_Image), or both. SMART_GPT uses the GPT-4 embedding API for text feature extraction, whereas SMART_Image combines nnU-Net for segmentation with DenseNet121 for classification. A composite model was developed by fusing the multimodal outputs through logistic regression on the SMART_GPT and SMART_Image predictions together with patient demographics (age and gender).
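The late-fusion step described above can be illustrated with a minimal sketch: a logistic regression that takes the per-patient probabilities from the two unimodal branches plus age and gender as predictors. The variable names, toy values, and 0/1 gender coding below are assumptions for illustration; the paper's exact features, preprocessing, and training protocol are not specified in this abstract.

```python
# Minimal late-fusion sketch (hypothetical data; not the authors' implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-patient probabilities produced by the two unimodal branches,
# plus simple demographics (age in years, gender coded 0/1).
p_text  = np.array([0.91, 0.12, 0.67, 0.05])   # SMART_GPT outputs
p_image = np.array([0.88, 0.20, 0.40, 0.10])   # SMART_Image outputs
age     = np.array([34, 61, 45, 29])
gender  = np.array([1, 0, 0, 1])
y       = np.array([1, 0, 1, 0])               # abdominal-trauma labels

# Stack the four predictors into a feature matrix and fit the fusion model.
X = np.column_stack([p_text, p_image, age, gender])
fusion = LogisticRegression().fit(X, y)

# Combined SMART probability for a new patient (toy values).
x_new = np.array([[0.75, 0.82, 52, 0]])
print(fusion.predict_proba(x_new)[:, 1])
```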
This study included 2638 patients (459 positive and 2179 negative abdominal trauma cases). An additional trauma-based test set comprised 1006 patients contributing 1632 consecutive real-world data points. SMART_GPT achieved a sensitivity of 81.3% and an area under the receiver operating characteristic curve (AUC) of 0.88 on unstructured text data. SMART_Image achieved a sensitivity of 87.5% and an AUC of 0.81 on non-contrast CT data, with average sensitivity exceeding 90% at the organ level. The integrated SMART model achieved a sensitivity of 93.8% and an AUC of 0.88. In emergency department simulations, SMART reduced waiting times by over 64.24%.
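For reference, the reported sensitivity and AUC figures are standard binary-classification metrics computed from predicted probabilities and ground-truth labels. The snippet below is a sketch of that computation with illustrative labels, scores, and a 0.5 decision threshold, none of which come from the paper.

```python
# Sketch of computing sensitivity and AUC from predictions (illustrative data only).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # ground-truth trauma labels
y_score = np.array([0.92, 0.31, 0.67, 0.85, 0.12, 0.55, 0.48, 0.05])
y_pred  = (y_score >= 0.5).astype(int)                   # assumed 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                             # true-positive rate (recall)
auc = roc_auc_score(y_true, y_score)                     # threshold-free ranking metric

print(f"sensitivity = {sensitivity:.3f}, AUC = {auc:.3f}")
```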
SMART provides rapid, objective trauma diagnostics, improving emergency care efficiency, reducing patient wait times, and enabling multimodal screening in diverse emergency contexts.