

Optimal Transport Guided Unsupervised Learning for Enhancing Low-Quality Retinal Images

Author Information

Zhu Wenhui, Qiu Peijie, Farazi Mohammad, Nandakumar Keshav, Dumitrascu Oana M, Wang Yalin

Affiliations

School of Computing and Augmented Intelligence, Arizona State University, AZ 85281, USA.

McKelvey School of Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA.

Publication Information

Proc IEEE Int Symp Biomed Imaging. 2023 Apr;2023. doi: 10.1109/isbi53787.2023.10230719. Epub 2023 Sep 1.

DOI: 10.1109/isbi53787.2023.10230719
PMID: 37736573
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10513403/
Abstract

Real-world non-mydriatic retinal fundus photography is prone to artifacts, imperfections, and low quality when certain ocular or systemic co-morbidities exist. Artifacts may result in inaccuracy or ambiguity in clinical diagnoses. In this paper, we proposed a simple but effective end-to-end framework for enhancing poor-quality retinal fundus images. Leveraging optimal transport theory, we proposed an unpaired image-to-image translation scheme for transporting low-quality images to their high-quality counterparts. We theoretically proved that a Generative Adversarial Network (GAN) model with a generator and discriminator is sufficient for this task. Furthermore, to mitigate the inconsistency of information between the low-quality images and their enhancements, an information consistency mechanism was proposed to maximally maintain structural consistency (optic discs, blood vessels, lesions) between the source and enhanced domains. Extensive experiments were conducted on the EyeQ dataset to demonstrate the superiority of our proposed method perceptually and quantitatively.
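The information-consistency idea in the abstract can be sketched numerically: the generator is trained both to fool the discriminator (an adversarial term) and to keep the enhanced image structurally close to its low-quality source (an SSIM-style penalty, per the structural-similarity work cited below). The following is a minimal NumPy sketch under stated assumptions: a simplified single-window SSIM instead of the usual sliding-window version, and a scalar `d_score` standing in for the discriminator's output; `ssim_global`, `enhancement_loss`, and `lam` are illustrative names, not the authors' implementation.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-window SSIM over the whole image.

    Uses global means/variances/covariance rather than local windows,
    so it only approximates the standard SSIM map's mean."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def enhancement_loss(low, enhanced, d_score, lam=1.0):
    """Generator objective: fool the discriminator while preserving structure.

    low, enhanced: images scaled to [0, 1]
    d_score: discriminator's belief that `enhanced` is high quality, in (0, 1]
    lam: weight on the structural-consistency penalty"""
    adv = -np.log(d_score + 1e-8)                    # non-saturating GAN term
    consistency = 1.0 - ssim_global(low, enhanced)   # 0 when structure is kept
    return adv + lam * consistency
```

An identical source and enhancement gives zero consistency penalty, so the loss reduces to the adversarial term; structural drift (vessels, optic disc, lesions changing) lowers SSIM and raises the penalty.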


Similar Articles

1. OPTIMAL TRANSPORT GUIDED UNSUPERVISED LEARNING FOR ENHANCING LOW-QUALITY RETINAL IMAGES.
Proc IEEE Int Symp Biomed Imaging. 2023 Apr;2023. doi: 10.1109/isbi53787.2023.10230719. Epub 2023 Sep 1.
2. OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing.
Inf Process Med Imaging. 2023 Jun;13939:415-427. doi: 10.1007/978-3-031-34048-2_32. Epub 2023 Jun 8.
3. Joint conditional generative adversarial networks for eyelash artifact removal in ultra-wide-field fundus images.
Front Cell Dev Biol. 2023 May 5;11:1181305. doi: 10.3389/fcell.2023.1181305. eCollection 2023.
4. Unsupervised arterial spin labeling image superresolution via multiscale generative adversarial network.
Med Phys. 2022 Apr;49(4):2373-2385. doi: 10.1002/mp.15468. Epub 2022 Mar 7.
5. Lesion-aware generative adversarial networks for color fundus image to fundus fluorescein angiography translation.
Comput Methods Programs Biomed. 2023 Feb;229:107306. doi: 10.1016/j.cmpb.2022.107306. Epub 2022 Dec 14.
6. AttentionGAN: Unpaired Image-to-Image Translation Using Attention-Guided Generative Adversarial Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Apr;34(4):1972-1987. doi: 10.1109/TNNLS.2021.3105725. Epub 2023 Apr 4.
7. Deep learning can generate traditional retinal fundus photographs using ultra-widefield images via generative adversarial networks.
Comput Methods Programs Biomed. 2020 Dec;197:105761. doi: 10.1016/j.cmpb.2020.105761. Epub 2020 Sep 16.
8. Retinal fundus image superresolution generated by optical coherence tomography based on a realistic mixed attention GAN.
Med Phys. 2022 May;49(5):3185-3198. doi: 10.1002/mp.15580. Epub 2022 Mar 30.
9. Paired-unpaired Unsupervised Attention Guided GAN with transfer learning for bidirectional brain MR-CT synthesis.
Comput Biol Med. 2021 Sep;136:104763. doi: 10.1016/j.compbiomed.2021.104763. Epub 2021 Aug 18.
10. Progressively Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation.
Sensors (Basel). 2023 Aug 1;23(15):6858. doi: 10.3390/s23156858.

Cited By

1. CUNSB-RFIE: Context-aware Unpaired Neural Schrödinger Bridge in Retinal Fundus Image Enhancement.
IEEE Winter Conf Appl Comput Vis. 2025 Feb-Mar;2025:4502-4511. doi: 10.1109/wacv61041.2025.00442. Epub 2025 Apr 8.
2. TPOT: TOPOLOGY PRESERVING OPTIMAL TRANSPORT IN RETINAL FUNDUS IMAGE ENHANCEMENT.
Proc IEEE Int Symp Biomed Imaging. 2025 Apr;2025. doi: 10.1109/isbi60581.2025.10981104. Epub 2025 May 12.
3. Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement.
IEEE Winter Conf Appl Comput Vis. 2025 Feb-Mar;2025:4016-4025. doi: 10.1109/wacv61041.2025.00395. Epub 2025 Apr 8.
4. nnMobileNet: Rethinking CNN for Retinopathy Research.
Conf Comput Vis Pattern Recognit Workshops. 2024 Jun;2024:2285-2294. doi: 10.1109/CVPRW63382.2024.00234. Epub 2024 Sep 27.
5. RBAD: A Dataset and Benchmark for Retinal Vessels Branching Angle Detection.
IEEE EMBS Int Conf Biomed Health Inform. 2024 Nov;2024. doi: 10.1109/bhi62660.2024.10913865.
6. Color Fundus Photography and Deep Learning Applications in Alzheimer Disease.
Mayo Clin Proc Digit Health. 2024 Dec;2(4):548-558. doi: 10.1016/j.mcpdig.2024.08.005. Epub 2024 Aug 26.
7. RECONSTRUCTING RETINAL VISUAL IMAGES FROM 3T FMRI DATA ENHANCED BY UNSUPERVISED LEARNING.
Proc IEEE Int Symp Biomed Imaging. 2024 May;2024. doi: 10.1109/isbi56570.2024.10635641. Epub 2024 Aug 22.
8. Robust PCA with L1 and L2 Norms: A Novel Method for Low-Quality Retinal Image Enhancement.
J Imaging. 2024 Jun 21;10(7):151. doi: 10.3390/jimaging10070151.
9. ESDiff: a joint model for low-quality retinal image enhancement and vessel segmentation using a diffusion model.
Biomed Opt Express. 2023 Nov 29;14(12):6563-6578. doi: 10.1364/BOE.506205. eCollection 2023 Dec 1.

References

1. Self-Supervised Equivariant Regularization Reconciles Multiple Instance Learning: Joint Referable Diabetic Retinopathy Classification and Lesion Segmentation.
Proc SPIE Int Soc Opt Eng. 2022 Nov;12567. doi: 10.1117/12.2669772. Epub 2023 Mar 6.
2. A deep learning model for detection of Alzheimer's disease based on retinal photographs: a retrospective, multicentre case-control study.
Lancet Digit Health. 2022 Nov;4(11):e806-e815. doi: 10.1016/S2589-7500(22)00169-8. Epub 2022 Sep 30.
3. Optimal Transport for Unsupervised Denoising Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):2104-2118. doi: 10.1109/TPAMI.2022.3170155. Epub 2023 Jan 6.
4. Modeling and Enhancing Low-Quality Retinal Fundus Images.
IEEE Trans Med Imaging. 2021 Mar;40(3):996-1006. doi: 10.1109/TMI.2020.3043495. Epub 2021 Mar 2.
5. On the mathematical properties of the structural similarity index.
IEEE Trans Image Process. 2012 Apr;21(4):1488-99. doi: 10.1109/TIP.2011.2173206. Epub 2011 Oct 24.
6. Image quality assessment: from error visibility to structural similarity.
IEEE Trans Image Process. 2004 Apr;13(4):600-12. doi: 10.1109/tip.2003.819861.