Bridging the Camera Domain Gap With Image-to-Image Translation Improves Glaucoma Diagnosis.

Affiliations

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China.

Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.

Publication information

Transl Vis Sci Technol. 2023 Dec 1;12(12):20. doi: 10.1167/tvst.12.12.20.

Abstract

PURPOSE

To improve the automated diagnosis of glaucomatous optic neuropathy (GON), we propose a generative adversarial network (GAN) model that translates Optain images into Topcon images.
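The abstract does not specify the GAN architecture; because training used paired Topcon-Optain photographs (see Methods), a conditional, pix2pix-style formulation is one plausible reading. The sketch below is an illustration under that assumption only: the toy networks, the lambda_l1 weight, and the training loop are not the authors' implementation.

```python
# Minimal sketch of a paired image-to-image translation GAN training step
# (pix2pix-style). Illustration only: the paper's actual generator,
# discriminator, and hyperparameters are not given in the abstract, so the
# tiny networks and lambda_l1 below are assumptions.
import torch
import torch.nn as nn

# Toy generator: maps an Optain-style image to a Topcon-style image.
G = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)
# Toy PatchGAN-like discriminator: judges (source, target) image pairs.
D = nn.Sequential(
    nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
lambda_l1 = 100.0  # assumed weight, as in the original pix2pix paper

def train_step(optain, topcon):
    """One update on a paired batch (Optain source, Topcon target)."""
    fake_topcon = G(optain)

    # Discriminator: real pairs -> 1, generated pairs -> 0.
    d_real = D(torch.cat([optain, topcon], dim=1))
    d_fake = D(torch.cat([optain, fake_topcon.detach()], dim=1))
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the paired target.
    d_fake = D(torch.cat([optain, fake_topcon], dim=1))
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + \
             lambda_l1 * l1_loss(fake_topcon, topcon)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example with random tensors standing in for 256x256 fundus photographs.
optain_batch = torch.rand(2, 3, 256, 256) * 2 - 1
topcon_batch = torch.rand(2, 3, 256, 256) * 2 - 1
print(train_step(optain_batch, topcon_batch))
```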

METHODS

We trained the GAN model on 725 paired images from Topcon and Optain cameras and externally validated it on an additional 843 paired images collected from the Aravind Eye Hospital in India. An optic disc segmentation model was used to assess disparities in disc parameters across cameras. The quality of the translated images was evaluated using root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), 95% limits of agreement (LOA), Pearson's correlation, and Cohen's Kappa coefficient. Using the GON model's performance on Topcon photographs as the reference, we compared its performance on Optain photographs and on GAN-translated photographs.
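As a concrete reference for the image-similarity metrics named above, the following sketch computes RMSE, PSNR, and SSIM between a reference Topcon photograph and its GAN-translated counterpart. The [0, 1] intensity range and the random placeholder arrays are assumptions for illustration, not the study's preprocessing.

```python
# Minimal sketch of the image-similarity metrics listed in Methods.
# Assumes float images scaled to [0, 1]; placeholder arrays only.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_metrics(reference, translated):
    """Both inputs: float arrays in [0, 1], shape (H, W, 3)."""
    rmse = np.sqrt(np.mean((reference - translated) ** 2))
    psnr = peak_signal_noise_ratio(reference, translated, data_range=1.0)
    ssim = structural_similarity(reference, translated,
                                 channel_axis=-1, data_range=1.0)
    return rmse, psnr, ssim

# Example with random arrays standing in for a paired fundus photograph.
ref = np.random.rand(256, 256, 3)
gen = np.clip(ref + 0.05 * np.random.randn(256, 256, 3), 0, 1)
print(image_metrics(ref, gen))
```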

RESULTS

The GAN model significantly reduced Optain false-positive results for GON diagnosis. The RMSE, PSNR, and SSIM of the GAN-translated images were 0.067, 14.31, and 0.64, respectively. The mean difference in vertical cup-to-disc ratio (VCDR) and cup-to-disc area ratio between Topcon and GAN images was 0.03, with 95% LOA ranging from -0.09 to 0.15 and from -0.05 to 0.10, respectively. Pearson correlation coefficients increased from 0.61 to 0.85 for VCDR and from 0.70 to 0.89 for cup-to-disc area ratio, and Cohen's Kappa improved from 0.32 to 0.60 after GAN translation.
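For readers less familiar with the agreement statistics quoted here, the sketch below computes a Bland-Altman mean difference with 95% LOA (mean difference +/- 1.96 x SD of the paired differences), Pearson's r, and Cohen's Kappa. The small arrays are illustrative placeholders, not the study data.

```python
# Minimal sketch of the agreement statistics reported in Results.
# Placeholder measurements only; not the study data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

def limits_of_agreement(reference, comparison):
    """Bland-Altman mean difference and 95% LOA for paired measurements."""
    diff = np.asarray(comparison) - np.asarray(reference)
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, mean_diff - half_width, mean_diff + half_width

# Placeholder VCDR measurements: Topcon (reference) vs. GAN-translated images.
vcdr_topcon = np.array([0.45, 0.60, 0.55, 0.70, 0.50])
vcdr_gan    = np.array([0.48, 0.62, 0.58, 0.73, 0.52])
print(limits_of_agreement(vcdr_topcon, vcdr_gan))
print(pearsonr(vcdr_topcon, vcdr_gan))

# Placeholder binary GON referral labels (1 = refer) for Cohen's Kappa.
labels_topcon = [1, 0, 0, 1, 0, 1]
labels_gan    = [1, 0, 1, 1, 0, 1]
print(cohen_kappa_score(labels_topcon, labels_gan))
```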

CONCLUSIONS

Image-to-image translation across cameras can be achieved with a GAN, resolving the problem of optic disc overexposure in Optain cameras.

TRANSLATIONAL RELEVANCE

Our approach enhances the generalizability of deep learning diagnostic models, preserving their performance on cameras outside the original training data set.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d6c0/10746931/e72e18519ab2/tvst-12-12-20-f001.jpg
