Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea.
B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea.
Comput Methods Programs Biomed. 2020 Dec;197:105761. doi: 10.1016/j.cmpb.2020.105761. Epub 2020 Sep 16.
Retinal imaging has two major modalities, traditional fundus photography (TFP) and ultra-widefield fundus photography (UWFP). This study demonstrates the feasibility of a state-of-the-art deep learning-based domain transfer from UWFP to TFP.
A cycle-consistent generative adversarial network (CycleGAN) was used to automatically translate UWFP images into the TFP domain. The model was trained on an unpaired dataset of 451 anonymized UWFP and 745 TFP images. To evaluate CycleGAN on an independent dataset, the data were randomly divided into training (90%) and test (10%) sets. After automated image registration and masking of dark frames, the generator and discriminator networks were trained. An additional twelve publicly available paired TFP and UWFP images were used to calculate intensity histograms and structural similarity (SSIM) indices.
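The core idea of CycleGAN is that two generators are trained jointly so that translating an image to the other domain and back reproduces the input. A minimal sketch of that cycle-consistency objective is below; the toy intensity maps `G` and `F` are hypothetical stand-ins for the UWFP-to-TFP and TFP-to-UWFP generator networks, not the authors' implementation.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two image arrays."""
    return float(np.mean(np.abs(a - b)))

# Hypothetical stand-ins for the two generators: G maps UWFP -> TFP style,
# F maps TFP -> UWFP style. Simple invertible intensity maps keep the
# sketch runnable without a deep learning framework.
G = lambda x: x * 0.8 + 0.1    # stand-in for the UWFP -> TFP generator
F = lambda y: (y - 0.1) / 0.8  # stand-in for the TFP -> UWFP generator

rng = np.random.default_rng(0)
uwfp = rng.random((64, 64))  # synthetic UWFP image with values in [0, 1]
tfp = rng.random((64, 64))   # synthetic TFP image with values in [0, 1]

# Cycle-consistency loss: a round trip through both domains should
# reproduce the input, L_cyc = |F(G(x)) - x|_1 + |G(F(y)) - y|_1.
cycle_loss = l1(F(G(uwfp)), uwfp) + l1(G(F(tfp)), tfp)
print(cycle_loss)  # near zero here, because the toy maps are exact inverses
```

In the actual method this term is added to the adversarial losses of the two discriminators, which push the translated images toward the style of the target domain.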
We observed that all UWFP images were successfully translated into TFP-style images by CycleGAN, and the main structural information of the retina and optic nerve was retained. The model did not generate fake features in the output images. Average histograms demonstrated that the intensity distribution of the generated output images matched the ground-truth images well, with an average SSIM index of 0.802.
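The two reported evaluation measures can be sketched with plain numpy. The single-window SSIM below follows the standard formula with the usual stabilizing constants; it is an illustrative assumption, since the paper does not specify the exact SSIM configuration used.

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """SSIM computed over the whole image as one window, with the
    conventional constants C1 = (0.01*L)^2, C2 = (0.03*L)^2 for
    dynamic range L (here 1.0 for images scaled to [0, 1])."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + C1) * (2 * cov + C2))
                 / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

rng = np.random.default_rng(1)
truth = rng.random((128, 128))                                   # ground-truth TFP
generated = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1)  # noisy copy

ssim_self = global_ssim(truth, truth)      # identical images give 1.0
ssim_gen = global_ssim(truth, generated)   # degraded copy scores below 1.0

# Intensity histograms on a shared [0, 1] grid, as used for the
# average-histogram comparison between generated and ground-truth images.
bins = np.linspace(0.0, 1.0, 33)
h_truth, _ = np.histogram(truth, bins=bins, density=True)
h_gen, _ = np.histogram(generated, bins=bins, density=True)
```

A windowed SSIM (e.g. 11x11 Gaussian windows, as in the original SSIM formulation) would give locally weighted scores, but the global version conveys the same luminance, contrast, and structure comparison.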
Our approach enables automated synthesis of TFP images directly from UWFP without a manual pre-conditioning process. The generated TFP images may be useful to clinicians investigating the posterior pole and to researchers integrating TFP and UWFP databases. This approach may also save scan time and reduce costs for patients by avoiding the additional examinations needed for an accurate diagnosis.