
Predicting sex from retinal fundus photographs using automated deep learning.

Affiliations

NIHR Biomedical Research Center at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK.

Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.

Publication information

Sci Rep. 2021 May 13;11(1):10286. doi: 10.1038/s41598-021-89743-x.

Abstract

Deep learning may transform health care, but model development has largely depended on the availability of advanced technical expertise. Herein we present the development, by clinicians without coding, of a deep learning model that predicts reported sex from retinal fundus photographs. The model was trained on 84,743 retinal fundus photographs from the UK Biobank dataset. External validation was performed on 252 fundus photographs from a tertiary ophthalmic referral center. On internal validation, the area under the receiver operating characteristic curve (AUROC) of the code-free deep learning (CFDL) model was 0.93. Sensitivity, specificity, positive predictive value (PPV) and accuracy (ACC) were 88.8%, 83.6%, 87.3% and 86.5% on internal validation, and 83.9%, 72.2%, 78.2% and 78.6% on external validation, respectively. Clinicians are currently unaware of distinct retinal feature variations between males and females, highlighting the importance of model explainability for this task. The model performed significantly worse when foveal pathology was present in the external validation dataset (ACC 69.4%, compared to 85.4% in healthy eyes; OR 0.36, 95% CI 0.19 to 0.70, p = 0.0022), suggesting the fovea is a salient region for model performance. Automated machine learning (AutoML) may enable clinician-driven automated discovery of novel insights and disease biomarkers.
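The metrics reported above all derive from simple confusion-matrix arithmetic: sensitivity, specificity, PPV and accuracy from true/false positive and negative counts, and the odds ratio with its Wald 95% confidence interval from a 2x2 table of correct versus incorrect predictions stratified by pathology status. A minimal sketch (the counts below are illustrative, not the study's data):

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    ppv = tp / (tp + fp)                    # positive predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)   # overall accuracy
    return sensitivity, specificity, ppv, acc

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table [[a, b], [c, d]],
    e.g. rows = foveal pathology present/absent,
    columns = prediction correct/incorrect."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Illustrative counts only (assumptions, not the paper's data)
sens, spec, ppv, acc = classification_metrics(tp=80, fp=10, tn=90, fn=20)
or_, lo, hi = odds_ratio_ci(a=50, b=22, c=123, d=21)
```

A CI that excludes 1.0, as in the study's reported OR of 0.36 (0.19, 0.70), indicates a statistically significant association between foveal pathology and misclassification at the 5% level.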


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fbd/8119673/248be49b918d/41598_2021_89743_Fig1_HTML.jpg
