Veturi Yoga Advaith, McNamara Steve, Kinder Scott, Clark Christopher William, Thakuria Upasana, Bearce Benjamin, Manoharan Niranjan, Mandava Naresh, Kahook Malik Y, Singh Praveer, Kalpathy-Cramer Jayashree
Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colorado.
Ophthalmol Sci. 2024 Nov 28;5(2):100664. doi: 10.1016/j.xops.2024.100664. eCollection 2025 Mar-Apr.
Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making. We introduce our deep learning (DL) pipeline, "EyeLiner," for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences due to camera orientation while preserving pathological changes.
EyeLiner registers a "moving" image to a "fixed" image using a DL-based keypoint matching algorithm.
We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), sequential images for glaucoma forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).
Anatomical keypoints along the retinal blood vessels were detected from the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned using the corresponding keypoints.
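To make the final step concrete, the sketch below shows one simple way to turn matched keypoints into a spatial transform: an affine transform fit by ordinary least squares and applied to the moving-image coordinates. This is an illustrative simplification, not the EyeLiner implementation described above (which learns the transformation parameters from the matched keypoints), and the function names fit_affine and apply_affine are hypothetical.

```python
import numpy as np

def fit_affine(moving_pts, fixed_pts):
    """Least-squares fit of a 2-D affine transform mapping matched keypoints
    in the moving image onto their counterparts in the fixed image.
    Both inputs are (N, 2) arrays of pixel coordinates."""
    n = moving_pts.shape[0]
    X = np.hstack([moving_pts, np.ones((n, 1))])       # homogeneous coordinates, (N, 3)
    A, *_ = np.linalg.lstsq(X, fixed_pts, rcond=None)  # solves X @ A ~= fixed_pts
    return A.T                                          # 2 x 3 affine matrix

def apply_affine(A, pts):
    """Map (N, 2) points through the 2 x 3 affine matrix A."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ A.T
```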
We computed the mean distance (MD) between manually annotated keypoints from the fixed and the registered moving image. For comparison to existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.
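As a rough sketch of how these two outcome measures can be computed from annotated keypoints, the snippet below implements the mean keypoint distance and a success-rate-versus-threshold AUC. It is a simplified interpretation rather than the exact FIRE evaluation code; the maximum error threshold and step count are assumptions, and the function names are hypothetical.

```python
import numpy as np

def mean_distance(fixed_kpts, registered_kpts):
    """Mean Euclidean distance (in pixels) between corresponding annotated
    keypoints in the fixed and registered moving images; both are (N, 2) arrays."""
    return float(np.mean(np.linalg.norm(fixed_kpts - registered_kpts, axis=1)))

def registration_auc(errors, max_threshold=25.0, num_steps=100):
    """Area under the success-rate vs. error-threshold curve.

    errors: one registration error per image pair (e.g., its mean keypoint
    distance). A pair counts as a success at threshold t if its error is <= t;
    the AUC is the area under success-rate(t) for t in [0, max_threshold],
    normalized by max_threshold so a perfect method scores 1.0."""
    errors = np.asarray(errors, dtype=float)
    thresholds = np.linspace(0.0, max_threshold, num_steps)
    success_rates = [(errors <= t).mean() for t in thresholds]
    return float(np.trapz(success_rates, thresholds) / max_threshold)
```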
EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitatively, alignment reduced the MD from 321.32 to 3.74 pixels for FIRE, from 9.86 to 2.03 pixels for CORIS, and from 25.23 to 5.94 pixels for SIGF. We also obtained AUCs of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, outperforming the current state-of-the-art SuperRetina (AUC = 0.76, 0.83, and 0.74, respectively).
Our pipeline demonstrates improved alignment of image pairs in comparison to the current state-of-the-art methods on 3 separate data sets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.
Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.