Christopher Nielsen, Matthias Wilms, Nils D. Forkert
Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB T2N 1N4, Canada.
Department of Radiology, University of Calgary, Calgary, AB T2N 1N4, Canada.
IEEE J Transl Eng Health Med. 2025 Jun 4;13:299-309. doi: 10.1109/JTEHM.2025.3576596. eCollection 2025.
The retinal age gap (RAG; the difference between the retina's biological and chronological age) has recently gained increased attention as a potential image-based, non-invasive, and accessible biomarker for a broad spectrum of ocular and non-ocular diseases. Traditionally, machine learning predictions of biological retinal age have utilized convolutional neural network (CNN) architectures and data from color fundus photography (CFP). Although previously unexplored, the multimodal fusion of two-dimensional CFP with three-dimensional optical coherence tomography (OCT) data has significant potential to enhance retinal age prediction accuracy and the diagnostic utility of the RAG biomarker. Therefore, this work presents a novel foundation model-based framework for multimodal retinal age prediction. Feature representations from CFP and OCT images were extracted using RETFound, a powerful foundation model for retinal image analysis. These representations were then combined using an innovative fusion strategy to train a lightweight linear regression head model for predicting retinal age. Training and evaluation of the developed multimodal retinal age prediction model were performed using retinal images from over 80,000 participants in the UK Biobank.
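The pipeline described above (foundation-model feature extraction followed by a lightweight fused regression head) can be sketched roughly as follows. Note that this is a minimal illustration, not the paper's implementation: random vectors stand in for the RETFound embeddings, the 1024-dimensional feature size and concatenation-based fusion are assumptions, and the abstract does not specify the exact fusion strategy used.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder embeddings standing in for RETFound features.
# In the real pipeline these would come from RETFound encoders
# applied to CFP (2D) and OCT (3D) scans; the 1024-dimensional
# size is an assumption for illustration only.
n_subjects = 500
cfp_feats = rng.normal(size=(n_subjects, 1024))  # color fundus photo features
oct_feats = rng.normal(size=(n_subjects, 1024))  # OCT volume features
ages = rng.uniform(40, 70, size=n_subjects)      # chronological ages (years)

# One possible fusion strategy: simple feature concatenation.
fused = np.concatenate([cfp_feats, oct_feats], axis=1)

# Lightweight linear regression head predicting retinal age.
head = LinearRegression().fit(fused, ages)
pred_ages = head.predict(fused)
mae = mean_absolute_error(ages, pred_ages)
```

The appeal of this design is that only the small linear head is trained, while the heavy foundation-model encoders are reused as fixed feature extractors, which keeps training cheap even at UK Biobank scale.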
The developed multimodal model sets a new benchmark in retinal age prediction (mean absolute error of 2.75 years), outperforming traditional CNN and single-modality approaches. Additionally, multimodal RAG values demonstrated superior performance in classifying patients with type 1 diabetes mellitus, multiple sclerosis, and chronic kidney disease, highlighting the clinical relevance of the proposed multimodal approach for non-ocular disease detection.
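The RAG itself is simply the predicted (biological) retinal age minus the chronological age, and its use as a disease biomarker can be sketched as below. The synthetic ages and labels, the assumed +3-year retinal aging effect in cases, and the single-feature logistic-regression classifier are all illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

n = 1000
chronological = rng.uniform(40, 70, size=n)

# Synthetic labels: cases get a systematically "older" retina,
# mimicking the premise that disease accelerates retinal aging.
# The 20% prevalence and +3-year effect are arbitrary.
disease = rng.random(n) < 0.2
predicted = chronological + rng.normal(0.0, 2.5, size=n) + 3.0 * disease

# Retinal age gap: biological (predicted) minus chronological age.
rag = predicted - chronological

# Single-feature classifier using the RAG as the biomarker.
clf = LogisticRegression().fit(rag.reshape(-1, 1), disease)
auc = roc_auc_score(disease, clf.predict_proba(rag.reshape(-1, 1))[:, 1])
```

A positive RAG indicates a retina that appears older than expected for the subject's age, which is the signal the disease classifiers in the study exploit.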
This work demonstrates that multimodal fusion of CFP and OCT significantly improves retinal age prediction and subsequent RAG-based analyses. By leveraging foundation models and multimodal retinal imaging, the proposed approach enhances disease classification accuracy and demonstrates the potential of integrating the RAG into clinical workflows as a scalable, non-invasive screening tool.
The findings underscore the potential of multimodal retinal imaging to transform RAG into a clinically relevant and highly accessible biomarker for disease detection.