Dumitrascu Oana M, Li Xin, Zhu Wenhui, Woodruff Bryan K, Nikolova Simona, Sobczak Jacob, Youssef Amal, Saxena Siddhant, Andreev Janine, Caselli Richard J, Chen John J, Wang Yalin
Department of Neurology, Mayo Clinic, Scottsdale, AZ; Department of Ophthalmology, Mayo Clinic, Scottsdale, AZ.
School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ.
Mayo Clin Proc Digit Health. 2024 Dec;2(4):548-558. doi: 10.1016/j.mcpdig.2024.08.005. Epub 2024 Aug 26.
To report the development and performance of 2 distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer disease (AD).
Two independent data sets (UK Biobank and our tertiary academic institution) of good-quality retinal photographs from patients with AD and controls were used to build 2 deep learning models between April 1, 2021, and January 30, 2024. ADVAS is a U-Net-based architecture that uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers (BERT)-style self-supervised convolutional neural network pretrained on a large data set of retinal color photographs from UK Biobank. The models' performance in distinguishing AD from non-AD was assessed using mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The generated attention heatmaps were analyzed for distinctive features.
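For readers less familiar with this pretraining style, the sketch below illustrates a BERT-style masked-reconstruction objective on image patches, the general idea behind self-supervised pretraining of the kind described for ADRET. It is a minimal PyTorch sketch under stated assumptions; all module and variable names (MaskedPatchPretrainer, mask_ratio, the linear stand-in encoder) are illustrative placeholders, not the authors' ADRET implementation.

# Minimal sketch of BERT-style masked-patch pretraining (illustrative only;
# not the authors' ADRET code, which uses a convolutional backbone).
import torch
import torch.nn as nn

class MaskedPatchPretrainer(nn.Module):
    """Reconstruct randomly masked image patches, BERT-style."""
    def __init__(self, patch_dim=768, mask_ratio=0.4):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(patch_dim))  # learned [MASK] embedding
        # Stand-in encoder/decoder; a real model would use a CNN or transformer.
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, patch_dim), nn.GELU(),
            nn.Linear(patch_dim, patch_dim),
        )
        self.decoder = nn.Linear(patch_dim, patch_dim)

    def forward(self, patches):                       # (B, N, D) flattened patch embeddings
        B, N, D = patches.shape
        mask = torch.rand(B, N, device=patches.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1),
                                self.mask_token.expand(B, N, D), patches)
        recon = self.decoder(self.encoder(corrupted))
        # Loss is computed only on masked positions, as in BERT-style objectives.
        return ((recon - patches) ** 2)[mask].mean()

model = MaskedPatchPretrainer()
loss = model(torch.randn(8, 196, 768))   # dummy batch standing in for fundus patches
loss.backward()

After pretraining in this fashion, the encoder can be fine-tuned with a small classification head for the AD vs non-AD task.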
The self-supervised ADRET model had superior accuracy compared with ADVAS in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional testing data sets (98.90% vs 94.17%; P=.04). No major differences were noted between the original and binary vessel segmentation inputs or between both-eyes and single-eye models. Attention heatmaps obtained from patients with AD highlighted the regions surrounding small vascular branches as the areas most relevant to the model's decision making.
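The abstract does not specify how the attention heatmaps were generated; Grad-CAM is one common technique for producing such relevance maps from a CNN classifier. The sketch below is a minimal Grad-CAM example assuming a torchvision ResNet as a stand-in AD vs non-AD classifier; it is not the authors' method.

# Grad-CAM-style heatmap sketch for a CNN fundus classifier (illustrative;
# the paper's exact heatmap method is not specified in the abstract).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2).eval()   # stand-in AD vs non-AD classifier
feats = {}

def hook(_, __, output):
    output.retain_grad()                 # keep gradients of the feature maps
    feats["maps"] = output

model.layer4.register_forward_hook(hook) # last convolutional block

img = torch.randn(1, 3, 224, 224)        # dummy fundus photograph
logits = model(img)
logits[0, 1].backward()                  # gradient of the "AD" logit

maps, grads = feats["maps"], feats["maps"].grad
weights = grads.mean(dim=(2, 3), keepdim=True)           # pool gradients per channel
cam = F.relu((weights * maps).sum(dim=1, keepdim=True))  # weighted feature-map sum
cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8) # normalize to [0, 1]

The normalized map can then be overlaid on the fundus photograph to visualize which retinal regions, such as small vascular branches, most influenced the prediction.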
A bidirectional encoder representations from transformers (BERT)-style self-supervised convolutional neural network pretrained on a large data set of retinal color photographs alone can screen for symptomatic AD with high accuracy, outperforming U-Net-pretrained models. To be translated into clinical practice, this methodology requires further validation in larger and more diverse populations, along with integrated techniques to harmonize fundus photographs and attenuate imaging-associated noise.