
Prior guided deep difference meta-learner for fast adaptation to stylized segmentation.

Author Information

Nguyen Dan, Balagopal Anjali, Bai Ti, Dohopolski Michael, Lin Mu-Han, Jiang Steve

Affiliations

Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America.

Publication Information

Mach Learn Sci Technol. 2025 Jun 30;6(2):025016. doi: 10.1088/2632-2153/adc970. Epub 2025 Apr 16.


DOI: 10.1088/2632-2153/adc970
PMID: 40247921
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12001319/
Abstract

Radiotherapy treatment planning requires segmenting anatomical structures in various styles, influenced by guidelines, protocols, preferences, or dose planning needs. Deep learning-based auto-segmentation models, trained on anatomical definitions, may not match local clinicians' styles at new institutions. Adapting these models can be challenging without sufficient resources. We hypothesize that consistent differences between segmentation styles and anatomical definitions can be learned from initial patients and applied to pre-trained models for more precise segmentation. We propose a Prior-guided deep difference meta-learner (DDL) to learn and adapt these differences. We collected data from 440 patients for model development and 30 for testing. The dataset includes contours of the prostate clinical target volume (CTV), parotid, and rectum. We developed a deep learning framework that segments new images with a matching style using example styles as a prior, without model retraining. The pre-trained segmentation models were adapted to three different clinician styles for post-operative CTV for prostate, parotid gland, and rectum segmentation. We tested the model's ability to learn unseen styles and compared its performance with transfer learning, using varying amounts of prior patient style data (0-10 patients). Performance was quantitatively evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance. With exposure to only three patients for the model, the average DSC (%) improved from 78.6, 71.9, 63.0, 69.6, 52.2, and 46.3 to 84.4, 77.8, 73.0, 77.8, 70.5, and 68.1 for CTV, CTV, CTV, Parotid, Rectum, and Rectum, respectively. The proposed Prior-guided DDL is a fast and effortless network for adapting a structure to new styles. The improved segmentation accuracy may result in reduced contour editing time, providing a more efficient and streamlined clinical workflow.
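
For readers who want to reproduce the kind of evaluation reported above, the following is a minimal illustrative sketch, not code from the paper, of how the two metrics named in the abstract (Dice similarity coefficient and Hausdorff distance) can be computed between a predicted binary mask and a reference contour mask. It assumes NumPy and SciPy are available; the function names and toy masks are hypothetical, and distances are in voxel units (multiply by the scan's voxel spacing to obtain millimetres). In the setting described above, pred would correspond to the style-adapted model output and ref to the clinician-style contour for a held-out patient.

```python
# Illustrative only: evaluation metrics named in the abstract (DSC and Hausdorff
# distance) for two binary segmentation masks of equal shape. Not the paper's code.
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom


def hausdorff_distance(pred: np.ndarray, ref: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground voxel sets, in voxels."""
    pred_pts = np.argwhere(pred).astype(float)
    ref_pts = np.argwhere(ref).astype(float)
    d_forward = directed_hausdorff(pred_pts, ref_pts)[0]
    d_backward = directed_hausdorff(ref_pts, pred_pts)[0]
    return max(d_forward, d_backward)


if __name__ == "__main__":
    # Toy 3D masks standing in for a reference contour and a slightly shifted prediction.
    ref = np.zeros((32, 32, 32), dtype=bool)
    ref[8:24, 8:24, 8:24] = True
    pred = np.roll(ref, shift=2, axis=0)
    print(f"DSC: {dice_coefficient(pred, ref):.3f}")                          # ~0.875
    print(f"Hausdorff distance: {hausdorff_distance(pred, ref):.1f} voxels")  # 2.0
```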


Figures (full-resolution images from PMC)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e53/12001319/1c042f0e89de/mlstadc970f1_hr.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e53/12001319/93095afa0e6f/mlstadc970f2_hr.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e53/12001319/e48edb1966c5/mlstadc970f3_hr.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e53/12001319/f5ef7dbc55ea/mlstadc970f4_hr.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e53/12001319/9000f8facc1c/mlstadc970f5_hr.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e53/12001319/48fe480ec631/mlstadc970f6_hr.jpg

Similar Articles

[1]
Prior guided deep difference meta-learner for fast adaptation to stylized segmentation.

Mach Learn Sci Technol. 2025-6-30

[2]
PSA-Net: Deep learning-based physician style-aware segmentation network for postoperative prostate cancer clinical target volumes.

Artif Intell Med. 2021-11

[3]
Patient-specific transfer learning for auto-segmentation in adaptive 0.35 T MRgRT of prostate cancer: a bi-centric evaluation.

Med Phys. 2023-3

[4]
A deep learning-based framework for segmenting invisible clinical target volumes with estimated uncertainties for post-operative prostate cancer radiotherapy.

Med Image Anal. 2021-8

[5]
Evaluating the clinical acceptability of deep learning contours of prostate and organs-at-risk in an automated prostate treatment planning process.

Med Phys. 2022-4

[6]
Prior information guided auto-segmentation of clinical target volume of tumor bed in postoperative breast cancer radiotherapy.

Radiat Oncol. 2023-10-15

[7]
Prior knowledge based deep learning auto-segmentation in magnetic resonance imaging-guided radiotherapy of prostate cancer.

Phys Imaging Radiat Oncol. 2023-10-10

[8]
Custom-Trained Deep Learning-Based Auto-Segmentation for Male Pelvic Iterative CBCT on C-Arm Linear Accelerators.

Pract Radiat Oncol. 2024

[9]
Deep learning-based segmentation in prostate radiation therapy using Monte Carlo simulated cone-beam computed tomography.

Med Phys. 2022-11

[10]
Point-cloud segmentation with in-silico data augmentation for prostate cancer treatment.

Med Phys. 2025-4-3

References Cited in This Article

[1]
U-Net Model with Transfer Learning Model as a Backbone for Segmentation of Gastrointestinal Tract.

Bioengineering (Basel). 2023-1-14

[2]
Patient-specific transfer learning for auto-segmentation in adaptive 0.35 T MRgRT of prostate cancer: a bi-centric evaluation.

Med Phys. 2023-3

[3]
Meta-learning with implicit gradients in a few-shot setting for medical image segmentation.

Comput Biol Med. 2022-4

[4]
PSA-Net: Deep learning-based physician style-aware segmentation network for postoperative prostate cancer clinical target volumes.

Artif Intell Med. 2021-11

[5]
Domain Adaptation for Medical Image Analysis: A Survey.

IEEE Trans Biomed Eng. 2022-3

[6]
Combined Transfer Learning and Test-Time Augmentation Improves Convolutional Neural Network-Based Semantic Segmentation of Prostate Cancer from Multi-Parametric MR Images.

Comput Methods Programs Biomed. 2021-10

[7]
Domain Adaptation for Medical Image Segmentation: A Meta-Learning Method.

J Imaging. 2021-2-10

[8]
Asymmetrical Multi-task Attention U-Net for the Segmentation of Prostate Bed in CT Image.

Med Image Comput Comput Assist Interv. 2020-10

[9]
A deep learning-based framework for segmenting invisible clinical target volumes with estimated uncertainties for post-operative prostate cancer radiotherapy.

Med Image Anal. 2021-8

[10]
Interobserver variability in clinical target volume delineation in anal squamous cell carcinoma.

Sci Rep. 2021-2-2
