
Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images.

Author Affiliations

Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China.

Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA.

Publication Information

Med Phys. 2019 Jun;46(6):2669-2682. doi: 10.1002/mp.13553. Epub 2019 May 6.

Abstract

PURPOSE

Image-guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on Head and Neck (H&N) CT and MR images is valuable to both initial treatment planning and adaptive planning, but manual contouring is laborious and inconsistent. A novel method based on the generative adversarial network (GAN) with shape constraint (SC-GAN) is developed for fully automated H&N OARs segmentation on CT and low-field MRI.

METHODS AND MATERIALS

A deep supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A convolutional neural network (CNN)-based discriminator network is then utilized to correct predicted errors and image-level inconsistency between the prediction and ground truth. An additional shape representation loss between the prediction and ground truth in the latent shape space is integrated into the segmentation and adversarial loss functions to reduce false positivity and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database including 32 patients, and then on 25 0.35T MR images obtained from an MR-guided radiotherapy system. The OARs include brainstem, optical chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (both left and right), optical nerves (both left and right), and submandibular glands (both left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and GAN with the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet) to quantify the contributions of shape constraint and DenseNet in the deep neural network segmentation.
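The training objective described above combines three terms: the voxel-wise segmentation loss, the adversarial loss from the CNN discriminator, and the shape representation loss computed in the latent shape space. A minimal sketch of that combination is below; the function names, the soft Dice formulation, and the weighting factors are illustrative assumptions, not the authors' implementation.

```python
def soft_dice_loss(pred, truth, eps=1e-6):
    """Soft Dice loss over flattened voxel probabilities.

    pred:  predicted foreground probabilities per voxel (floats in [0, 1])
    truth: ground-truth labels per voxel (0 or 1)
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 - (2.0 * intersection + eps) / (total + eps)


def sc_gan_loss(seg_loss, adv_loss, shape_loss,
                lam_adv=0.01, lam_shape=0.1):
    """Total loss: segmentation term plus weighted adversarial and
    shape-representation terms (weights are hypothetical)."""
    return seg_loss + lam_adv * adv_loss + lam_shape * shape_loss
```

In this formulation the adversarial term penalizes image-level inconsistency between prediction and ground truth, while the shape term penalizes distance between their encodings in a learned latent shape space, discouraging anatomically implausible predictions.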

RESULTS

The proposed SC-GAN slightly but consistently improved the segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which had outperformed other published methods on the same or similar CT H&N datasets. On the low-field MR dataset, the following average Dice's indices were obtained using the improved SC-GAN: 0.916 (brainstem), 0.589 (optical chiasm), 0.816 (mandible), 0.703 (optical nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx). The 95% surface distance ranged from 1.48 mm (left optical nerve) to 3.92 mm (larynx). Compared with CT, by 95% surface distance evaluation, the automated segmentation accuracy was higher on MR for the brainstem, optical chiasm, optical nerves, and parotids, and lower for the mandible. The SC-GAN performance was superior to SC-GAN-ResNet, which in turn was more accurate than GAN alone on both the CT and MR datasets. The segmentation time for one patient was 14 seconds using a single GPU.
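The Dice index and the average/95% surface distances reported above can be sketched as follows for small point sets; this is a simplified illustration of the metrics (brute-force nearest-neighbor search, simple percentile indexing), not the evaluation code used in the study.

```python
import math


def dice_index(a, b):
    """Dice similarity between two voxel sets (Python sets of coordinates)."""
    return 2.0 * len(a & b) / (len(a) + len(b))


def surface_distances(surf_a, surf_b):
    """For each surface point of A, the distance to the nearest point of B."""
    return [min(math.dist(p, q) for q in surf_b) for p in surf_a]


def percentile_95(values):
    """95th-percentile value via nearest-rank on the sorted list."""
    s = sorted(values)
    idx = min(len(s) - 1, round(0.95 * (len(s) - 1)))
    return s[idx]
```

In practice both directions (A to B and B to A) are pooled before averaging or taking the 95th percentile, which is why the 95% surface distance is less sensitive to isolated outlier voxels than the maximum (Hausdorff) distance.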

CONCLUSION

The performance of our previous shape-constrained fully convolutional network for H&N segmentation is further improved by incorporating GAN and DenseNet. With the novel segmentation method, we showed that the low-field MR images acquired on an MR-guided radiotherapy system can support accurate and fully automated segmentation of both bony and soft-tissue OARs for adaptive radiotherapy.

