Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net.

Affiliations

Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.

Publication Information

Med Phys. 2019 Jul;46(7):3194-3206. doi: 10.1002/mp.13577. Epub 2019 May 29.

Abstract

PURPOSE

Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method which integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation.

METHODS AND MATERIALS

We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D deep supervision mechanism is integrated into the V-Net stages to cope with the optimization difficulties of training a deep network on limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deeply supervised training. During segmentation, patches extracted from a newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined through a contour refinement step.
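The stage-wise hybrid loss combines BCE, which penalizes per-voxel misclassification, with a batch-based soft Dice term, which rewards overall overlap. A minimal NumPy sketch of such a hybrid loss (illustrative only; the weighting and smoothing constants are assumptions, not the authors' implementation):

```python
import numpy as np

def hybrid_loss(probs, labels, w_bce=0.5, w_dice=0.5, eps=1e-7):
    """Hybrid loss: weighted sum of binary cross-entropy and a
    batch-based soft Dice loss. Weights w_bce/w_dice are illustrative."""
    probs = np.clip(probs, eps, 1.0 - eps)
    # Binary cross-entropy averaged over all voxels in the batch.
    bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    # Batch-based Dice: overlap is accumulated over the whole batch at once,
    # which stabilizes the gradient when some patches contain little prostate.
    intersection = np.sum(probs * labels)
    dice = (2.0 * intersection + eps) / (np.sum(probs) + np.sum(labels) + eps)
    return w_bce * bce + w_dice * (1.0 - dice)

# A near-perfect prediction drives the loss toward zero.
y_true = np.array([1.0, 1.0, 0.0, 0.0])
loss = hybrid_loss(np.array([0.99, 0.98, 0.01, 0.02]), y_true)
```

In deep supervision, a loss of this form would be attached to auxiliary outputs at several network stages and summed, so that gradients reach the early layers directly.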

RESULTS

Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.
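DSC measures volumetric overlap between two binary masks, while HD measures the worst-case disagreement between two surfaces. As an illustration of how these metrics are defined (a sketch, not the authors' evaluation code), with a brute-force Hausdorff on surface point sets:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * overlap / total if total else 1.0

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two surface point sets of
    shape (N, 3), e.g. contour vertices in millimetres. Brute force:
    fine for small contours, O(N*M) in memory."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # Largest of the two directed worst-case surface distances.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The mean surface distance reported above averages, rather than maximizes, the same point-to-surface distances, which is why MSD is much smaller than HD.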

CONCLUSION

We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate in TRUS images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
