

AI-Enabled Screening for Retinopathy of Prematurity in Low-Resource Settings.

Author Information

Ortiz Anthony, Patiño Susana, Torres Jehú, Mármol Juan, Serafin Carlos, Dodhia Rahul, Saidman Gabriela, Schbib Vanina, Peña Brenda, Monteoliva Guillermo, Martinez-Castellanos María Ana, Weeks William B, Lavista Ferres Juan M

Affiliations

Microsoft AI for Good Lab, Redmond, Washington.

Business Data Evolution, Mexico City, Mexico.

Publication Information

JAMA Netw Open. 2025 Apr 1;8(4):e257831. doi: 10.1001/jamanetworkopen.2025.7831.

Abstract

IMPORTANCE

Retinopathy of prematurity (ROP) is the leading cause of preventable childhood blindness worldwide. If detected and treated early, ROP-associated blindness is preventable; however, identifying patients who might respond to treatment requires screening over time, which is challenging in low-resource settings where access to pediatric ophthalmologists and pediatric ocular imaging cameras is limited.

OBJECTIVE

To develop and assess the performance of a machine learning algorithm that uses smartphone-collected videos to perform retinal screening for ROP in low-resource settings.

DESIGN, SETTING, AND PARTICIPANTS

This diagnostic study used smartphone-obtained videos of the fundi of premature neonates with and without ROP, collected in Mexico and Argentina between May 12, 2020, and October 31, 2023. Machine learning (ML)-driven algorithms were developed to process a video, identify the best frames within it, and use those frames to determine whether ROP was likely. Eligible neonates, born at a gestational age of less than 36 weeks or with a birth weight of less than 1500 g, were included in the study.
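The two-stage pipeline described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation (which is not published here): the function names, the quality threshold, and the any-positive-frame rule for the patient-level call are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the two-stage screening pipeline:
# (1) score every frame of a smartphone video for image quality and keep
#     the best ones; (2) classify the kept frames, then aggregate to a
#     patient-level ROP call. Names and thresholds are illustrative only.

def select_best_frames(frames, quality_score, top_k=5, threshold=0.5):
    """Return up to top_k frames whose quality score meets the threshold,
    ordered from highest to lowest quality."""
    scored = [(quality_score(f), f) for f in frames]
    good = [(s, f) for s, f in scored if s >= threshold]
    good.sort(key=lambda sf: sf[0], reverse=True)
    return [f for _, f in good[:top_k]]

def classify_patient(selected_frames, frame_classifier):
    """Patient-level call: flag likely ROP if any selected frame is
    classified positive (one plausible aggregation rule)."""
    return any(frame_classifier(f) for f in selected_frames)
```

In a real system the frames would be image arrays and both `quality_score` and `frame_classifier` would be trained models; here they are left as injectable callables so the control flow stays visible.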

EXPOSURES

An ML algorithm applied to a smartphone-obtained video.

MAIN OUTCOMES AND MEASURES

The ML algorithms' ability to identify high-quality retinal images and classify those images as indicating ROP or not at the frame and patient levels, measured by accuracy, specificity, and sensitivity, compared with classifications from 3 pediatric ophthalmologists.
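The three reported measures follow from the standard confusion-matrix definitions. A minimal reference implementation (the counts in the test are illustrative, not from the study):

```python
# Standard screening metrics from confusion-matrix counts:
# tp/fn = diseased patients correctly/incorrectly called,
# tn/fp = healthy patients correctly/incorrectly called.

def sensitivity(tp, fn):
    """Fraction of true ROP cases the screen catches."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-ROP cases the screen correctly clears."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Fraction of all cases called correctly."""
    return (tp + tn) / (tp + tn + fp + fn)
```

For a screening task like ROP, sensitivity is usually weighted most heavily, since a missed case can mean preventable blindness while a false positive only triggers a specialist referral.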

RESULTS

A total of 524 videos were collected for 512 neonates with median gestational age of 32 weeks (range, 25-36 weeks) and median birth weight of 1610 g (range, 580-2800 g). The frame selection model identified high-quality retinal images from 397 of 456 videos (87.1%; 95% CI, 84.0%-90.1%) reserved for testing model performance. Across all test videos, 97.4% (95% CI, 96.7%-98.1%) of high-quality retinal images selected by the model contained fundus images. At the frame level, the ROP classifier model had a sensitivity of 76.7% (95% CI, 69.9%-83.5%); at the patient level, the classifier model had a sensitivity of 93.3% (95% CI, 86.4%-100%). At both levels, the model's sensitivity was higher than that for the panel of pediatric ophthalmologists (frame level: 71.4% [95% CI, 64.1%-78.7%]; patient level: 73.3% [95% CI, 61.0%-85.6%]). Specificity and accuracy were higher for ophthalmologist classification vs the ML model.
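The paper does not state which interval method was used for the 95% CIs, but a simple Wald (normal-approximation) interval reproduces the reported frame-selection figure of 87.1% (84.0%-90.1%) from 397 of 456 videos:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Wald 95% CI for a binomial proportion (normal approximation)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Frame selection: 397 of 456 test videos yielded high-quality images.
lo, hi = wald_ci(397, 456)  # ≈ (0.840, 0.901), matching the reported CI
```

This is a consistency check under an assumed method; other intervals (e.g., Wilson) give similar but not identical bounds at this sample size.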

CONCLUSIONS AND RELEVANCE

In this diagnostic study, a process that used smartphone-collected videos of premature neonates' fundi to identify high-quality retinal images and classify them as indicating ROP or not had higher sensitivity, but lower specificity and accuracy, than ophthalmologist assessment. At a fraction of the cost of the current process for retinal image collection and classification, this approach could expand access to ROP screening in low-resource settings and help prevent the most common cause of avoidable childhood blindness.


Figure 1. https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e229/12042057/7b4fdf90f66b/jamanetwopen-e257831-g001.jpg
