Learning with limited target data to detect cells in cross-modality images.

Affiliations

Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.

Publication information

Med Image Anal. 2023 Dec;90:102969. doi: 10.1016/j.media.2023.102969. Epub 2023 Sep 29.

Abstract

Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited, a setting that previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
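
To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: (a) differentiable, stochastic data augmentation applied to both real and generated images before the discriminator, in the spirit of DiffAugment (Zhao et al., 2020), and (b) a task-augmented generator objective in which a frozen cell-detection network supplies an extra supervision signal on source-to-target translated images. Every name here (`diff_augment`, `discriminator_loss`, `generator_loss`, `detector`), the specific augmentation ops, and the MSE task loss on a cell map are illustrative assumptions read off the abstract, not details from the paper.

```python
# Illustrative sketch only; all names and loss choices are assumptions.
import torch
import torch.nn.functional as F

def diff_augment(x: torch.Tensor) -> torch.Tensor:
    """Stochastic brightness jitter + translation, built solely from
    differentiable tensor ops so gradients flow through to the generator."""
    # Random per-image brightness shift.
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
    # Random translation (shared across the batch for simplicity):
    # zero-pad, then take a randomly shifted crop of the original size.
    _, _, h, w = x.shape
    ph, pw = max(h // 8, 1), max(w // 8, 1)
    x = F.pad(x, [pw, pw, ph, ph])
    ty = int(torch.randint(0, 2 * ph + 1, (1,)))
    tx = int(torch.randint(0, 2 * pw + 1, (1,)))
    return x[:, :, ty:ty + h, tx:tx + w]

def discriminator_loss(D, real_tgt, fake_tgt):
    """Real and generated images receive the same kind of random
    augmentation, which regularizes D when target images are scarce."""
    lr = D(diff_augment(real_tgt))
    lf = D(diff_augment(fake_tgt.detach()))
    return (F.binary_cross_entropy_with_logits(lr, torch.ones_like(lr)) +
            F.binary_cross_entropy_with_logits(lf, torch.zeros_like(lf)))

def generator_loss(D, G, detector, src_img, src_cell_map, w_task=1.0):
    """Adversarial term plus a task term: the translated source image
    should still reproduce the source cell annotations under a frozen
    detection network, giving G supervision beyond fooling D."""
    fake_tgt = G(src_img)
    lf = D(diff_augment(fake_tgt))
    adv = F.binary_cross_entropy_with_logits(lf, torch.ones_like(lf))
    task = F.mse_loss(detector(fake_tgt), src_cell_map)  # hypothetical cell-map target
    return adv + w_task * task
```

Because the augmentation is composed of differentiable tensor operations, adversarial gradients still reach the generator through augmented fakes, and showing the discriminator only augmented views of the few real target images is what counters its overfitting under limited target data.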

Similar articles

Low-Resource Adversarial Domain Adaptation for Cross-Modality Nucleus Detection.
Med Image Comput Comput Assist Interv. 2022 Sep;13437:639-649. doi: 10.1007/978-3-031-16449-1_61. Epub 2022 Sep 17.

Cited by

Abnormality-aware multimodal learning for WSI classification.
Front Med (Lausanne). 2025 Feb 25;12:1546452. doi: 10.3389/fmed.2025.1546452. eCollection 2025.
