Leveraging Regular Fundus Images for Training UWF Fundus Diagnosis Models via Adversarial Learning and Pseudo-Labeling.

Publication information

IEEE Trans Med Imaging. 2021 Oct;40(10):2911-2925. doi: 10.1109/TMI.2021.3056395. Epub 2021 Sep 30.

Abstract

Recently, ultra-widefield (UWF) 200° fundus imaging with Optos cameras has gradually been adopted because it captures far more of the fundus than regular 30°-60° fundus cameras. Compared with UWF fundus images, regular fundus images are available as a large amount of high-quality, well-annotated data. Due to the domain gap, models trained on regular fundus images perform poorly when recognizing UWF fundus images. Hence, given that annotating medical data is labor-intensive and time-consuming, in this paper we explore how to leverage regular fundus images to supplement the limited UWF fundus data and annotations for more efficient training. We propose a modified cycle generative adversarial network (CycleGAN) model to bridge the gap between regular and UWF fundus images and to generate additional UWF fundus images for training. A consistency regularization term is added to the GAN loss to improve and regulate the quality of the generated data. Our method does not require that images from the two domains be paired or even that their semantic labels be the same, which greatly simplifies data collection. Furthermore, we show that, with the pseudo-labeling technique, our method is robust to the noise and errors introduced by the generated unlabeled data. We evaluated the effectiveness of our methods on several common fundus diseases and tasks, such as diabetic retinopathy (DR) classification, lesion detection and tessellated fundus segmentation. The experimental results demonstrate that the proposed method simultaneously achieves superior generalizability of the learned representations and performance improvements on multiple tasks.
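
The abstract does not give the exact form of the modified CycleGAN objective, the consistency regularization term, or the pseudo-labeling rule, so the sketch below is only a rough PyTorch illustration of how these three pieces typically fit together, not the authors' implementation. The toy networks (`tiny_cnn`, `D_uwf`, `classifier`), the perturbation-based consistency term, and the confidence threshold of 0.9 are all assumptions made for illustration.

```python
# Minimal sketch: CycleGAN-style translation (regular -> UWF) with a consistency
# regularization term and pseudo-labeling on generated UWF images.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn():
    # Placeholder generator; a real implementation would use a ResNet/U-Net backbone.
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

G_r2u, G_u2r = tiny_cnn(), tiny_cnn()                 # regular -> UWF and UWF -> regular generators
D_uwf = nn.Conv2d(3, 1, 4, stride=2, padding=1)       # toy patch discriminator for the UWF domain
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(3, 5))           # toy diagnosis head (e.g. 5 DR grades)

def generator_loss(x_reg, lambda_cyc=10.0):
    """Adversarial + cycle-consistency terms for the regular -> UWF direction."""
    fake_uwf = G_r2u(x_reg)
    pred = D_uwf(fake_uwf)
    adv = F.mse_loss(pred, torch.ones_like(pred))     # LSGAN adversarial loss
    cyc = F.l1_loss(G_u2r(fake_uwf), x_reg)           # cycle consistency
    return adv + lambda_cyc * cyc, fake_uwf

def discriminator_loss(x_uwf, fake_uwf, lambda_cons=1.0):
    """Real/fake loss plus an assumed consistency regularizer: the discriminator
    should respond similarly to a real UWF image and a mildly perturbed copy,
    which regularizes the quality of the generated data."""
    pred_real = D_uwf(x_uwf)
    pred_fake = D_uwf(fake_uwf.detach())
    real = F.mse_loss(pred_real, torch.ones_like(pred_real))
    fake = F.mse_loss(pred_fake, torch.zeros_like(pred_fake))
    perturbed = x_uwf + 0.05 * torch.randn_like(x_uwf)
    cons = F.mse_loss(pred_real, D_uwf(perturbed))
    return real + fake + lambda_cons * cons

def pseudo_label_loss(fake_uwf, threshold=0.9):
    """Train the diagnosis model on generated UWF images using only its own
    confident predictions as labels, to stay robust to noisy generations."""
    with torch.no_grad():
        probs = F.softmax(classifier(fake_uwf), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > threshold                       # discard low-confidence samples
    if keep.sum() == 0:
        return torch.tensor(0.0)
    logits = classifier(fake_uwf[keep].detach())      # gradients update the classifier only
    return F.cross_entropy(logits, pseudo[keep])

# One illustrative step on random tensors standing in for image batches.
x_reg, x_uwf = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
g_loss, fake_uwf = generator_loss(x_reg)
d_loss = discriminator_loss(x_uwf, fake_uwf)
p_loss = pseudo_label_loss(fake_uwf)
print(g_loss.item(), d_loss.item(), p_loss.item())
```

In an actual training loop the generator, discriminator, and classifier losses would be optimized with separate optimizers, and the confidence threshold controls how aggressively noisy generated samples are filtered out before they influence the diagnosis model.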
