School of Software Engineering, Xi'an Jiaotong University, Xi'an, China.
Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China.
Med Phys. 2023 Aug;50(8):5030-5044. doi: 10.1002/mp.16280. Epub 2023 Feb 10.
Accurate segmentation of organs is of great significance for clinical diagnosis, but it remains challenging because tissue adhesion blurs organ boundaries in medical images. Owing to the continuity across slices in a medical image volume, the segmentation of a slice with an unclear boundary can be inferred from adjacent slices in which the organ boundary is clear. Radiologists delineate such ambiguous boundaries in the same way, by consulting adjacent slices.
Inspired by this delineation procedure, we design an organ segmentation model that exploits boundary information from adjacent slices and adopts a human-machine interactive learning strategy to incorporate clinical experience.
We propose an interactive organ segmentation method for medical image volumes based on a Graph Convolutional Network (GCN), called Surface-GCN. First, we propose a Surface Feature Extraction Network (SFE-Net) to capture surface features of the target organ, supervised by a Mini-batch Adaptive Surface Matching (MBASM) module. Then, to predict organ boundaries precisely, we design an automatic segmentation module based on a Surface Convolution Unit (SCU), which propagates information along the organ surface to refine the generated boundaries; an illustrative sketch of such a propagation step is given below. In addition, an interactive segmentation module is proposed to learn from radiologists' interactive corrections on organ surfaces and thereby reduce the number of interaction clicks.
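The abstract does not specify the internals of the SCU; purely as an illustration, the following minimal Python sketch shows one plausible graph-convolution step over organ-surface vertices, assuming the surface is represented as a graph with a binary adjacency matrix (the function name, shapes, and symmetric normalization are assumptions, not the authors' implementation).

import numpy as np

def surface_conv_unit(features, adjacency, weights):
    """Illustrative graph-convolution step over organ-surface vertices.

    features : (N, F) array of per-vertex surface features (e.g., from SFE-Net; assumed).
    adjacency: (N, N) binary adjacency matrix of the surface graph (assumed).
    weights  : (F, F_out) learnable projection matrix (assumed).
    """
    # Add self-loops so each vertex retains its own feature.
    a_hat = adjacency + np.eye(adjacency.shape[0])
    # Symmetric degree normalization, as in a standard GCN layer.
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Propagate features along the surface, project, and apply ReLU.
    return np.maximum(norm @ features @ weights, 0.0)

# Toy usage: 4 surface vertices in a ring, 8-dimensional features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 8))
refined = surface_conv_unit(feats, adj, w)  # shape (4, 8)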
We evaluate the proposed method on one prostate MR image dataset and two abdominal multi-organ CT datasets. The experimental results show that our method outperforms other state-of-the-art methods. For prostate segmentation, the proposed method achieves a DSC of 94.49% on the PROMISE12 test dataset. For abdominal multi-organ segmentation, it achieves DSC scores of 95%, 91%, 95%, and 88% for the left kidney, gallbladder, spleen, and esophagus, respectively. For interactive segmentation, the proposed method requires 5-10 fewer interaction clicks to reach the same accuracy.
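For reference, the DSC values reported above are Dice similarity coefficients; a minimal computation over two binary segmentation masks (variable names assumed) is sketched below.

import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks (assumed NumPy arrays)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / denom if denom > 0 else 1.0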
To address the challenge of medical organ segmentation, we propose a Graph Convolutional Network, called Surface-GCN, that imitates radiologists' interactions and learns from clinical experience. On both single-organ and multi-organ segmentation tasks, the proposed method obtains more accurate segmentation boundaries than other state-of-the-art methods.