Artificial intelligence-based model to classify cardiac functions from chest radiographs: a multi-institutional, retrospective model development and validation study.

Affiliations

Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan.

Publication information

Lancet Digit Health. 2023 Aug;5(8):e525-e533. doi: 10.1016/S2589-7500(23)00107-3. Epub 2023 Jul 6.

Abstract

BACKGROUND

Chest radiography is a common and widely available examination. Although cardiovascular structures, such as cardiac shadows and vessels, are visible on chest radiographs, the ability of these radiographs to estimate cardiac function and valvular disease is poorly understood. Using datasets from multiple institutions, we aimed to develop and validate a deep-learning model to simultaneously detect valvular disease and cardiac functions from chest radiographs.

METHODS

In this model development and validation study, we trained, validated, and externally tested a deep learning-based model to classify left ventricular ejection fraction, tricuspid regurgitant velocity, mitral regurgitation, aortic stenosis, aortic regurgitation, mitral stenosis, tricuspid regurgitation, pulmonary regurgitation, and inferior vena cava dilation from chest radiographs. The chest radiographs and associated echocardiograms were collected from four institutions between April 1, 2013, and Dec 31, 2021: we used data from three sites (Osaka Metropolitan University Hospital, Osaka, Japan; Habikino Medical Center, Habikino, Japan; and Morimoto Hospital, Osaka, Japan) for training, validation, and internal testing, and data from one site (Kashiwara Municipal Hospital, Kashiwara, Japan) for external testing. We evaluated the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy.
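As a minimal illustration of the evaluation metrics named above (AUC, sensitivity, specificity, and accuracy), the sketch below computes them from binary labels and classifier scores. This is not the authors' code, and the label and score values are hypothetical; the AUC is computed via the Mann-Whitney U statistic, which is equivalent to the area under the ROC curve.

```python
# Illustrative sketch (not the study's implementation): the four metrics
# reported in the abstract, computed for a binary classifier.

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def confusion_metrics(labels, scores, threshold=0.5):
    """Sensitivity, specificity, and accuracy at a fixed score threshold."""
    preds = [int(s >= threshold) for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

labels = [1, 1, 1, 0, 0, 0]              # hypothetical ground truth
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]  # hypothetical model outputs
print(round(auc(labels, scores), 3))     # -> 0.889
```

In practice, such per-task metrics would be computed once per classification target (ejection fraction, each valvular lesion, and so on), with confidence intervals estimated by bootstrapping or an equivalent method.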

FINDINGS

We included 22 551 radiographs associated with 22 551 echocardiograms obtained from 16 946 patients. The external test dataset featured 3311 radiographs from 2617 patients with a mean age of 72 years [SD 15], of whom 49·8% were male and 50·2% were female. The AUCs, accuracy, sensitivity, and specificity for this dataset were 0·92 (95% CI 0·90-0·95), 86% (85-87), 82% (75-87), and 86% (85-88) for classifying the left ventricular ejection fraction at a 40% cutoff, 0·85 (0·83-0·87), 75% (73-76), 83% (80-87), and 73% (71-75) for classifying the tricuspid regurgitant velocity at a 2·8 m/s cutoff, 0·89 (0·86-0·92), 85% (84-86), 82% (76-87), and 85% (84-86) for classifying mitral regurgitation at the none-mild versus moderate-severe cutoff, 0·83 (0·78-0·88), 73% (71-74), 79% (69-87), and 72% (71-74) for classifying aortic stenosis, 0·83 (0·79-0·87), 68% (67-70), 88% (81-92), and 67% (66-69) for classifying aortic regurgitation, 0·86 (0·67-1·00), 90% (89-91), 83% (36-100), and 90% (89-91) for classifying mitral stenosis, 0·92 (0·89-0·94), 83% (82-85), 87% (83-91), and 83% (82-84) for classifying tricuspid regurgitation, 0·86 (0·82-0·90), 69% (68-71), 91% (84-95), and 68% (67-70) for classifying pulmonary regurgitation, and 0·85 (0·81-0·89), 86% (85-88), 73% (65-81), and 87% (86-88) for classifying inferior vena cava dilation.
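For illustration only, the cutoff-based labelling reported above can be sketched as follows. The cutoff values (left ventricular ejection fraction at 40%, tricuspid regurgitant velocity at 2.8 m/s) come from this abstract, but the dictionary keys, the positive-class directions, and the function itself are assumptions, not the study's definitions.

```python
# Hypothetical sketch: turning continuous echocardiographic measurements
# into the binary classification targets described in the abstract.
# Cutoff values are from the abstract; directions are assumptions.

CUTOFFS = {
    "lvef_percent": (40.0, "below"),   # assume LVEF < 40% is the positive class
    "tr_velocity_ms": (2.8, "above"),  # assume TR velocity > 2.8 m/s is positive
}

def binarize(measure: str, value: float) -> int:
    """Return 1 for the positive (abnormal) class, 0 otherwise."""
    cutoff, direction = CUTOFFS[measure]
    return int(value < cutoff) if direction == "below" else int(value > cutoff)
```

Graded lesions (such as mitral regurgitation, dichotomised at none-mild versus moderate-severe) would be handled analogously by mapping severity categories to the two classes.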

INTERPRETATION

The deep learning-based model can accurately classify cardiac functions and valvular heart diseases using information from digital chest radiographs. This model can classify values typically obtained from echocardiography in a fraction of the time, with low system requirements and the potential to be continuously available in areas where echocardiography specialists are scarce or absent.

FUNDING

None.
