Holste Gregory, Oikonomou Evangelos K, Tokodi Márton, Kovács Attila, Wang Zhangyang, Khera Rohan
Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA.
Section of Cardiovascular Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA.
medRxiv. 2025 Apr 16:2024.11.16.24317431. doi: 10.1101/2024.11.16.24317431.
Echocardiography is a cornerstone of cardiovascular care but relies on expert interpretation and manual reporting from a series of videos. We propose an artificial intelligence (AI) system, PanEcho, to automate echocardiogram interpretation with multi-task deep learning.
To develop PanEcho and evaluate its accuracy on a comprehensive set of 39 echocardiographic labels and measurements from transthoracic echocardiography (TTE).
This study describes the development and retrospective, multi-site validation of an AI system. PanEcho was developed using a sample of TTE studies performed during routine care at Yale-New Haven Health System (YNHHS) hospitals and clinics from January 2016 through June 2022. The trained model was internally validated in a temporally distinct YNHHS cohort from July through December 2022, externally validated across four diverse external cohorts, and made publicly available.
The primary outcome was the area under the receiver operating characteristic curve (AUC) for diagnostic classification tasks and mean absolute error (MAE) for parameter estimation tasks, comparing AI predictions with the assessment of the interpreting cardiologist.
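As a hypothetical illustration of these two metrics (not the authors' evaluation code), AUC for a binary diagnostic label can be computed via the rank-sum (Mann-Whitney U) formulation, and MAE can be normalized by the range of the reference measurements; whether the paper normalizes by range, standard deviation, or another quantity is an assumption here:

```python
def auc(labels, scores):
    # AUC via the Mann-Whitney U statistic: the probability that a randomly
    # chosen positive case is scored above a randomly chosen negative case.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def normalized_mae(truth, pred):
    # Mean absolute error, scaled by the range of the reference values
    # (one common normalization convention; assumed, not confirmed by the paper).
    mae = sum(abs(t - p) for t, p in zip(truth, pred)) / len(truth)
    return mae / (max(truth) - min(truth))

# Toy example: two positives, two negatives; three LVEF-like measurements.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))        # 0.75
print(normalized_mae([60, 40, 55], [62, 45, 50]))       # 0.2
```

In practice a library implementation such as scikit-learn's `roc_auc_score` and `mean_absolute_error` would be used; the stdlib version above is only to make the definitions concrete.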
This study included 1.2 million echocardiographic videos from 32,265 TTE studies of 24,405 patients across YNHHS hospitals and clinics. PanEcho performed 18 diagnostic classification tasks with a median AUC of 0.91 (IQR: 0.88-0.93) and estimated 21 echocardiographic parameters with a median normalized MAE of 0.13 (IQR: 0.10-0.18) in internal validation. For instance, the model accurately estimated left ventricular (LV) ejection fraction (MAE: 4.2% internal; 4.5% external) and detected moderate or greater LV systolic dysfunction (AUC: 0.98 internal; 0.99 external), right ventricular (RV) systolic dysfunction (0.93 internal; 0.94 external), and severe aortic stenosis (0.98 internal; 1.00 external). PanEcho maintained excellent performance under limited imaging protocols, performing 15 diagnostic tasks with a median AUC of 0.91 (IQR: 0.87-0.94) in an abbreviated TTE cohort and 14 tasks with a median AUC of 0.85 (IQR: 0.77-0.87) on real-world point-of-care ultrasound acquisitions by non-experts in YNHHS emergency departments.
We report an AI system that automatically interprets echocardiograms, maintaining high accuracy across sites and over time on both complete and limited studies. PanEcho may be used as an adjunct reader in echocardiography labs or as a rapid AI-enabled screening tool in point-of-care settings.