School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China.
Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China.
PLoS One. 2024 May 2;19(5):e0298227. doi: 10.1371/journal.pone.0298227. eCollection 2024.
Medical image segmentation is a critical application that plays a significant role in clinical research. Although many deep neural networks have achieved high accuracy in medical image segmentation, annotated labels remain scarce, making it difficult to train a robust and generalizable model. Few-shot learning has the potential to predict new classes unseen during training from only a few annotations. In this study, a novel few-shot semantic segmentation framework named prototype-based generative adversarial network (PG-Net) is proposed for medical image segmentation with scarce annotations. The proposed PG-Net consists of two subnetworks: the prototype-based segmentation network (P-Net) and the guided evaluation network (G-Net). On one hand, the P-Net, as a generator, focuses on extracting multi-scale features and local spatial information in order to produce refined predictions with discriminative context between foreground and background. On the other hand, the G-Net, as a discriminator equipped with an attention mechanism, further distills the relational knowledge between support and query, and helps the P-Net produce query segmentation masks whose distributions are more similar to the support. Hence, the PG-Net can enhance segmentation quality through an adversarial training strategy. Comparative experiments demonstrate that, relative to state-of-the-art (SOTA) few-shot segmentation methods, the proposed PG-Net provides noticeably more robust and prominent generalization ability on datasets of different medical imaging modalities, including an abdominal Computed Tomography (CT) dataset and an abdominal Magnetic Resonance Imaging (MRI) dataset.
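To make the prototype-based segmentation idea behind the P-Net concrete, the following is a minimal sketch of the generic technique such frameworks build on: a class prototype is computed from support features by masked average pooling, and each query pixel is classified by its cosine similarity to that prototype. This is an illustration of the standard prototype mechanism, not the authors' implementation; NumPy stands in for feature maps that would normally come from a CNN encoder, and all function names here are hypothetical.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Class prototype: average of feature vectors inside the support mask.
    features: (H, W, C) array; mask: (H, W) binary array."""
    m = mask.astype(features.dtype)[..., None]          # (H, W, 1)
    return (features * m).sum(axis=(0, 1)) / (m.sum() + 1e-8)

def cosine_similarity_map(features, prototype):
    """Score each query pixel by cosine similarity to the prototype."""
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return f @ p                                        # (H, W)

# Toy example: foreground pixels point along one feature direction,
# background pixels along another, so the prototype separates them cleanly.
H, W, C = 4, 4, 3
support_feat = np.zeros((H, W, C))
support_feat[..., 1] = 1.0                              # background direction
support_feat[:2, :2] = [1.0, 0.0, 0.0]                  # foreground patch
support_mask = np.zeros((H, W))
support_mask[:2, :2] = 1

prototype = masked_average_pooling(support_feat, support_mask)
query_feat = support_feat.copy()                        # assume query resembles support
pred_mask = (cosine_similarity_map(query_feat, prototype) > 0.5).astype(int)
```

In a full few-shot pipeline, thresholding would be replaced by a softmax over foreground and background prototypes, and in PG-Net the resulting prediction would additionally be scored by the discriminator (G-Net) during adversarial training.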