State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China.
Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
Transl Vis Sci Technol. 2023 Dec 1;12(12):20. doi: 10.1167/tvst.12.12.20.
The purpose of this study was to improve the automated diagnosis of glaucomatous optic neuropathy (GON). To this end, we propose a generative adversarial network (GAN) model that translates Optain images into Topcon images.
We trained the GAN model on 725 paired images from Topcon and Optain cameras and externally validated it on an additional 843 paired images collected from the Aravind Eye Hospital in India. An optic disc segmentation model was used to assess the disparities in disc parameters across cameras. The quality of the translated images was evaluated using root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), 95% limits of agreement (LOA), Pearson's correlations, and Cohen's kappa coefficient. The evaluation compared the performance of the GON model on Topcon photographs, used as the reference, with its performance on Optain photographs and on GAN-translated photographs.
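The image-similarity metrics above have standard definitions; a minimal sketch of RMSE and PSNR for intensities scaled to [0, 1] is given below. The pixel values are hypothetical toy data, not the study's images, and real pipelines would typically use a library such as scikit-image instead.

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length pixel sequences
    (intensities in [0, 1], matching the scale of the reported RMSE of 0.067)."""
    n = len(a)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n)

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    e = rmse(a, b)
    if e == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_val / e)

# Toy 2x2 "images" flattened to lists (hypothetical values).
ref = [0.2, 0.4, 0.6, 0.8]      # reference (e.g. Topcon) intensities
gen = [0.25, 0.35, 0.65, 0.75]  # translated (e.g. GAN output) intensities

print(round(rmse(ref, gen), 3))  # -> 0.05
print(round(psnr(ref, gen), 2))  # -> 26.02
```

SSIM additionally compares local luminance, contrast, and structure over sliding windows, so it is omitted from this sketch.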
The GAN model significantly reduced Optain false-positive results for GON diagnosis. The RMSE, PSNR, and SSIM of the GAN-translated images were 0.067, 14.31, and 0.64, respectively. The mean difference between Topcon and GAN images was 0.03 for both vertical cup-to-disc ratio (VCDR) and cup-to-disc area ratio, with 95% LOA ranging from -0.09 to 0.15 and from -0.05 to 0.10, respectively. Pearson correlation coefficients increased from 0.61 to 0.85 for VCDR and from 0.70 to 0.89 for cup-to-disc area ratio, and Cohen's kappa improved from 0.32 to 0.60 after GAN translation.
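The kappa improvement reported above measures chance-corrected agreement between diagnostic labels. A minimal sketch of Cohen's kappa for binary labels follows; the label lists are hypothetical toy data, not the study's results.

```python
from collections import Counter

def cohens_kappa(x, y):
    """Cohen's kappa for two sets of categorical labels of equal length
    (e.g. GON / no GON calls on reference vs. translated photographs)."""
    n = len(x)
    # Observed agreement: fraction of cases where the two label sets match.
    po = sum(a == b for a, b in zip(x, y)) / n
    # Expected agreement by chance, from each set's marginal label frequencies.
    cx, cy = Counter(x), Counter(y)
    pe = sum(cx[k] * cy[k] for k in set(cx) | set(cy)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical GON labels (1 = GON) from the reference-camera model
# and from the model run on translated images.
ref_labels = [1, 0, 0, 1, 0, 1, 0, 0]
gan_labels = [1, 0, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(ref_labels, gan_labels), 2))  # -> 0.75
```

In practice a library routine such as scikit-learn's `cohen_kappa_score` would be used; the formula is the same.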
Image-to-image translation across cameras can be achieved with a GAN, solving the problem of optic disc overexposure in Optain cameras.
Our approach enhances the generalizability of deep learning diagnostic models, maintaining their performance on cameras outside the original training data set.