Making Artificial Intelligence Lemonade Out of Data Lemons: Adaptation of a Public Apical Echo Database for Creation of a Subxiphoid Visual Estimation Automatic Ejection Fraction Machine Learning Algorithm.

Affiliations

Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA.

Department of Emergency Medicine, St. Francis Hospital, Columbus, GA, USA.

Publication information

J Ultrasound Med. 2022 Aug;41(8):2059-2069. doi: 10.1002/jum.15889. Epub 2021 Nov 24.

Abstract

OBJECTIVES

A paucity of point-of-care ultrasound (POCUS) databases limits machine learning (ML). We assessed the feasibility of training ML algorithms to visually estimate left ventricular ejection fraction (EF) from a subxiphoid (SX) window using only apical 4-chamber (A4C) images.

METHODS

Researchers used a long short-term memory (LSTM) algorithm for image analysis. Using the Stanford EchoNet-Dynamic database of 10,036 A4C videos with exact calculated EF, researchers tested 3 ML training permutations: first, training on unaltered Stanford A4C videos; second, on unaltered and 90° clockwise (CW) rotated videos; and third, on unaltered, 90° CW rotated, and horizontally flipped videos. As a real-world test, we obtained 615 SX videos from Harbor-UCLA (HUCLA) with EF reported in 5% ranges. To compensate for the mismatch between the ML point estimates and the HUCLA ranges, researchers performed 1000 randomizations of EF point estimates within the HUCLA EF ranges, obtaining a mean absolute error (MAE) for comparison, and performed Bland-Altman analyses.
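
The two data-handling steps described above, the rotation/flip augmentation of the A4C training set and the 1000 randomizations of EF point estimates within the 5%-wide HUCLA ranges, can be sketched as follows. This is a minimal illustration rather than the authors' code: the array layout, the assumption of uniform sampling within each EF range, and the helper names (augment_apical_video, randomized_mae) are hypothetical.

```python
# Minimal sketch, not the authors' implementation. Assumes each video is a
# NumPy array shaped (frames, height, width) and that EF point estimates are
# drawn uniformly within each 5%-wide HUCLA range (an assumption; the
# abstract does not specify the sampling distribution).
import numpy as np

def augment_apical_video(video: np.ndarray) -> list[np.ndarray]:
    """Return the unaltered, 90° clockwise rotated, and horizontally flipped variants."""
    rotated = np.rot90(video, k=-1, axes=(1, 2))   # 90° CW rotation in the image plane
    flipped = video[:, :, ::-1]                    # left-right mirror of each frame
    return [video, rotated, flipped]

def randomized_mae(predicted_ef: np.ndarray,
                   range_low: np.ndarray,
                   range_high: np.ndarray,
                   n_draws: int = 1000,
                   seed: int = 0) -> tuple[float, float, float]:
    """Sample EF point estimates within each reported range n_draws times and
    return the mean, minimum, and maximum of the resulting MAEs."""
    rng = np.random.default_rng(seed)
    maes = []
    for _ in range(n_draws):
        sampled_ef = rng.uniform(range_low, range_high)           # one draw per SX video
        maes.append(np.mean(np.abs(predicted_ef - sampled_ef)))   # MAE for this draw
    maes = np.asarray(maes)
    return float(maes.mean()), float(maes.min()), float(maes.max())
```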

RESULTS

The ML algorithm's mean MAE for EF was 23.0 (range 22.8-23.3) when trained on unaltered A4C video, 16.7 (range 16.5-16.9) when trained on unaltered and 90° CW rotated video, and 16.6 (range 16.3-16.8) when trained on unaltered, 90° CW rotated, and horizontally flipped video. Bland-Altman analysis showed the weakest agreement at 40-45% EF.
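
For context, a conventional Bland-Altman agreement analysis reports the bias (mean difference) and 95% limits of agreement between the algorithm's EF and the reference EF. The sketch below shows the standard computation; the abstract confirms Bland-Altman analyses were performed, but this helper (bland_altman) and its details are assumptions, not the authors' code.

```python
# Hedged sketch of a standard Bland-Altman calculation; assumes paired
# per-video EF values (algorithm vs. reference) as NumPy arrays.
import numpy as np

def bland_altman(pred_ef: np.ndarray, ref_ef: np.ndarray) -> dict:
    diffs = pred_ef - ref_ef                  # per-video disagreement
    bias = float(diffs.mean())                # mean difference (bias)
    sd = float(diffs.std(ddof=1))             # SD of the differences
    return {
        "bias": bias,
        "loa_lower": bias - 1.96 * sd,        # lower 95% limit of agreement
        "loa_upper": bias + 1.96 * sd,        # upper 95% limit of agreement
        "means": (pred_ef + ref_ef) / 2.0,    # x-axis of a Bland-Altman plot
        "diffs": diffs,                       # y-axis of a Bland-Altman plot
    }
```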

CONCLUSIONS

Researchers successfully adapted data from an unrelated ultrasound window to train a POCUS ML algorithm with fair MAE, using data manipulation to simulate a different ultrasound examination. This approach may be important for future POCUS algorithm design, helping to overcome the paucity of POCUS databases.
