Cunefare David, Langlo Christopher S, Patterson Emily J, Blau Sarah, Dubra Alfredo, Carroll Joseph, Farsiu Sina
Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA.
Department of Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA.
Biomed Opt Express. 2018 Jul 18;9(8):3740-3756. doi: 10.1364/BOE.9.003740. eCollection 2018 Aug 1.
Fast and reliable quantification of cone photoreceptors is a bottleneck in the clinical utilization of adaptive optics scanning light ophthalmoscope (AOSLO) systems for the study, diagnosis, and prognosis of retinal diseases. To date, manual grading has been the sole reliable source of AOSLO quantification, as no automatic method has been reliably applied to cone detection in real-world, low-quality images of diseased retinas. We present a novel deep learning-based approach that combines information from both the confocal and non-confocal split detector AOSLO modalities to detect cones in subjects with achromatopsia. Our dual-mode deep learning-based approach outperforms state-of-the-art automated techniques and is on par with human grading.
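The abstract does not describe the network itself, but the dual-mode idea (fusing confocal and split-detector information before deciding whether a patch contains a cone) can be sketched as a small two-branch classifier. The PyTorch sketch below is an illustration only: the patch size, layer widths, and late fusion by feature concatenation are assumptions made here for clarity and are not taken from the paper.

```python
# Illustrative sketch of a dual-mode patch classifier (assumed design, not the
# authors' architecture): one convolutional branch per AOSLO modality, with the
# two feature vectors concatenated before a cone / background decision.
import torch
import torch.nn as nn


class DualModeConeNet(nn.Module):
    def __init__(self, patch_size: int = 33):  # 33x33 patches: an assumption
        super().__init__()
        # Independent convolutional branch for each registered modality.
        self.confocal_branch = self._make_branch()
        self.split_branch = self._make_branch()
        # Each branch halves the spatial size twice (two max-pool layers).
        feat_side = patch_size // 4
        self.classifier = nn.Sequential(
            nn.Linear(2 * 32 * feat_side * feat_side, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 2),  # logits: [background, cone]
        )

    @staticmethod
    def _make_branch() -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, confocal: torch.Tensor, split: torch.Tensor) -> torch.Tensor:
        # confocal, split: (N, 1, H, W) patches from the two co-registered modalities.
        f_conf = self.confocal_branch(confocal).flatten(1)
        f_split = self.split_branch(split).flatten(1)
        fused = torch.cat([f_conf, f_split], dim=1)  # late fusion by concatenation
        return self.classifier(fused)


if __name__ == "__main__":
    net = DualModeConeNet()
    conf = torch.randn(4, 1, 33, 33)   # batch of confocal patches
    split = torch.randn(4, 1, 33, 33)  # corresponding split-detector patches
    print(net(conf, split).shape)      # torch.Size([4, 2])
```

In a sketch like this, per-patch scores would be evaluated densely across the image and local maxima taken as candidate cone locations; the actual detection pipeline used in the paper is not specified in this abstract.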