School of Computing, Queen's University, Kingston, ON, Canada; Department of Electrical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran.
School of Computing, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada.
Comput Biol Med. 2024 Mar;170:107982. doi: 10.1016/j.compbiomed.2024.107982. Epub 2024 Jan 18.
Accurate brain tumour segmentation is critical for tasks such as surgical planning, diagnosis, and analysis, with magnetic resonance imaging (MRI) being the preferred modality due to its excellent visualisation of brain tissues. However, the wide intensity range of voxel values in MR scans often results in significant overlap between the density distributions of different tumour tissues, leading to reduced contrast and segmentation accuracy. This paper introduces a novel framework based on conditional generative adversarial networks (cGANs) aimed at enhancing the contrast of tumour subregions for both voxel-wise and region-wise segmentation approaches. We present two models: Enhancement and Segmentation GAN (ESGAN), which combines classifier loss with adversarial loss to predict central labels of input patches, and Enhancement GAN (EnhGAN), which generates high-contrast synthetic images with reduced inter-class overlap. These synthetic images are then fused with corresponding modalities to emphasise meaningful tissues while suppressing weaker ones. We also introduce a novel generator that adaptively calibrates voxel values within input patches, leveraging fully convolutional networks. Both models employ a multi-scale Markovian network as a GAN discriminator to capture local patch statistics and estimate the distribution of MR images in complex contexts. Experimental results on publicly available MR brain tumour datasets demonstrate the competitive accuracy of our models compared to current brain tumour segmentation techniques.
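To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of two of its ingredients: a Markovian (PatchGAN-style) discriminator applied at several scales, and an ESGAN-style generator objective that sums an adversarial term with a classifier term on the central label of each input patch. The layer widths, the number of scales, the class count, and the weight lambda_cls are illustrative assumptions, not the configuration reported in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PatchDiscriminator(nn.Module):
        """Markovian discriminator: scores overlapping local patches, not whole volumes."""
        def __init__(self, in_ch):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(in_ch, 32, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv3d(32, 64, 4, stride=2, padding=1),
                nn.InstanceNorm3d(64),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv3d(64, 1, 4, stride=1, padding=1),  # one logit per local patch
            )

        def forward(self, x):
            return self.net(x)

    class MultiScaleDiscriminator(nn.Module):
        """Applies a patch discriminator to progressively downsampled inputs,
        so local statistics are judged at several receptive-field sizes."""
        def __init__(self, in_ch, n_scales=3):
            super().__init__()
            self.discs = nn.ModuleList([PatchDiscriminator(in_ch) for _ in range(n_scales)])
            self.down = nn.AvgPool3d(3, stride=2, padding=1)

        def forward(self, x):
            outs = []
            for d in self.discs:
                outs.append(d(x))
                x = self.down(x)
            return outs

    def esgan_generator_loss(disc_outs_fake, class_logits, center_labels, lambda_cls=1.0):
        """Adversarial loss (fool the patch discriminator at every scale)
        plus classifier loss on the predicted central label of each patch."""
        adv = sum(F.binary_cross_entropy_with_logits(o, torch.ones_like(o))
                  for o in disc_outs_fake)
        cls = F.cross_entropy(class_logits, center_labels)
        return adv + lambda_cls * cls

    # Hypothetical usage: 4-channel MR patches, a 5-class central-label head.
    D = MultiScaleDiscriminator(in_ch=4)
    fake = torch.randn(2, 4, 64, 64, 64)      # generator output fused with modalities
    logits = torch.randn(2, 5)                # stand-in for the classifier head output
    labels = torch.randint(0, 5, (2,))
    loss = esgan_generator_loss(D(fake), logits, labels)

Emitting one logit per local patch, rather than a single score per volume, is what lets the discriminator capture local patch statistics; summing the adversarial term over scales is one common way to cover both fine texture and broader context.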