
Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images.

Affiliations

Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China.

Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA.

Publication

Med Phys. 2019 Jun;46(6):2669-2682. doi: 10.1002/mp.13553. Epub 2019 May 6.

DOI: 10.1002/mp.13553
PMID: 31002188
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6581189/
Abstract

PURPOSE

Image-guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on Head and Neck (H&N) CT and MR images is valuable to both initial treatment planning and adaptive planning, but manual contouring is laborious and inconsistent. A novel method based on the generative adversarial network (GAN) with shape constraint (SC-GAN) is developed for fully automated H&N OARs segmentation on CT and low-field MRI.

METHODS AND MATERIALS

A deeply supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A convolutional neural network (CNN)-based discriminator network is then utilized to correct prediction errors and image-level inconsistency between the prediction and ground truth. An additional shape representation loss between the prediction and ground truth in the latent shape space is integrated into the segmentation and adversarial loss functions to reduce false positives and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database of 32 patients, and then on 25 0.35-T MR images obtained from an MR-guided radiotherapy system. The OARs include the brainstem, optic chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (left and right), optic nerves (left and right), and submandibular glands (left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and with GAN plus the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet) to quantify the contributions of the shape constraint and the DenseNet to deep neural network segmentation.
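The composite objective described above — a voxel-wise segmentation loss, an adversarial term from the discriminator, and a shape representation loss computed in the latent space of a shape encoder — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weighting factors `lambda_adv` and `lambda_shape`, the `encode` function, and the exact form of each term are assumptions.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between predicted probabilities and a binary ground truth."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def adversarial_loss(disc_score_on_pred):
    """Generator-side adversarial term: push the discriminator's score on the
    predicted mask toward 1, i.e. 'indistinguishable from ground truth'."""
    return -np.log(disc_score_on_pred + 1e-12)

def shape_representation_loss(encode, pred, target):
    """Squared L2 distance between prediction and ground truth in the latent
    shape space of a (pretrained, frozen) shape encoder."""
    return float(np.mean((encode(pred) - encode(target)) ** 2))

def sc_gan_segmentation_loss(pred, target, disc_score, encode,
                             lambda_adv=0.01, lambda_shape=0.1):
    """Total loss for the segmentation (generator) network: segmentation term
    plus weighted adversarial and shape-constraint terms."""
    return (dice_loss(pred, target)
            + lambda_adv * adversarial_loss(disc_score)
            + lambda_shape * shape_representation_loss(encode, pred, target))
```

In training, `disc_score` would be the discriminator's output on the predicted mask and `encode` the encoder half of a shape autoencoder trained on ground-truth contours; both the λ weights and the toy encoder here are hypothetical placeholders.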

RESULTS

The proposed SC-GAN slightly but consistently improved segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which had outperformed other published methods on the same or similar H&N CT datasets. On the low-field MR dataset, the following average Dice indices were obtained with the proposed SC-GAN: 0.916 (brainstem), 0.589 (optic chiasm), 0.816 (mandible), 0.703 (optic nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx). The 95% surface distance ranged from 1.48 mm (left optic nerve) to 3.92 mm (larynx). Compared with CT, by the 95% surface distance measure, automated segmentation accuracy was higher on MR for the brainstem, optic chiasm, optic nerves, and parotids, and lower for the mandible. SC-GAN was superior to SC-GAN-ResNet, which in turn was more accurate than GAN alone on both the CT and MR datasets. The segmentation time for one patient was 14 seconds on a single GPU.
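The metrics reported above (Dice index, average surface distance, 95% surface distance) can be computed for binary masks roughly as follows. This is a brute-force 2D sketch for illustration only — not the study's evaluation code — and it ignores voxel spacing, so distances are in voxel units rather than mm.

```python
import numpy as np

def dice_index(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def boundary_points(mask):
    """Coordinates of foreground voxels with at least one 4-neighbour background voxel."""
    padded = np.pad(mask, 1)
    pts = []
    for i, j in zip(*np.nonzero(mask)):
        pi, pj = i + 1, j + 1
        if not (padded[pi - 1, pj] and padded[pi + 1, pj]
                and padded[pi, pj - 1] and padded[pi, pj + 1]):
            pts.append((i, j))
    return np.asarray(pts, dtype=float)

def surface_distances(a, b):
    """Symmetric nearest-neighbour distances between the two mask boundaries."""
    pa, pb = boundary_points(a), boundary_points(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return np.concatenate([d.min(axis=1), d.min(axis=0)])

def average_surface_distance(a, b):
    return float(surface_distances(a, b).mean())

def surface_distance_95(a, b):
    """95th percentile of the symmetric surface distances."""
    return float(np.percentile(surface_distances(a, b), 95))
```

In practice, 3D evaluation on CT/MR volumes would use a distance transform (e.g., `scipy.ndimage.distance_transform_edt`) scaled by the voxel spacing instead of this O(n·m) pairwise computation.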

CONCLUSION

The performance of our previous shape-constrained fully convolutional network for H&N segmentation is further improved by incorporating the GAN and DenseNet. With the novel segmentation method, we showed that low-field MR images acquired on an MR-guided radiotherapy system can support accurate and fully automated segmentation of both bony and soft-tissue OARs for adaptive radiotherapy.
