Nguyen Dan, Balagopal Anjali, Bai Ti, Dohopolski Michael, Lin Mu-Han, Jiang Steve
Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America.
Mach Learn Sci Technol. 2025 Jun 30;6(2):025016. doi: 10.1088/2632-2153/adc970. Epub 2025 Apr 16.
Radiotherapy treatment planning requires segmenting anatomical structures in various styles, influenced by guidelines, protocols, preferences, or dose planning needs. Deep learning-based auto-segmentation models, trained on fixed anatomical definitions, may not match local clinicians' styles at new institutions, and adapting these models can be challenging without sufficient resources. We hypothesize that consistent differences between segmentation styles and anatomical definitions can be learned from a few initial patients and applied to pre-trained models to achieve more precise segmentation. We propose a prior-guided deep difference meta-learner (DDL) to learn and adapt to these differences. We collected data from 440 patients for model development and 30 for testing. The dataset includes contours of the prostate clinical target volume (CTV), parotid gland, and rectum. We developed a deep learning framework that segments new images in a matching style, using example styles as a prior, without model retraining. The pre-trained segmentation models were adapted to three different clinician styles for post-operative prostate CTV, parotid gland, and rectum segmentation. We tested the model's ability to learn unseen styles and compared its performance with transfer learning, using varying amounts of prior patient style data (0-10 patients). Performance was quantitatively evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance. With exposure to only three patients of a new style, the average DSC (%) improved from 78.6, 71.9, 63.0, 69.6, 52.2, and 46.3 to 84.4, 77.8, 73.0, 77.8, 70.5, and 68.1 for the three CTV styles, the parotid style, and the two rectum styles, respectively. The proposed prior-guided DDL is a fast and effortless network for adapting a structure to new styles. The improved segmentation accuracy may reduce contour-editing time, enabling a more efficient and streamlined clinical workflow.
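The two evaluation metrics reported above can be computed directly from binary contour masks and boundary point sets. A minimal NumPy sketch (the function names and the brute-force Hausdorff computation are illustrative, not taken from the paper, which does not specify its implementation):

```python
import numpy as np

def dice_similarity(pred, gt):
    """Dice similarity coefficient (%) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 100.0 * 2.0 * np.logical_and(pred, gt).sum() / denom

def hausdorff_distance(pred_pts, gt_pts):
    """Symmetric Hausdorff distance between two (N, D) point sets."""
    pred_pts = np.asarray(pred_pts, dtype=float)
    gt_pts = np.asarray(gt_pts, dtype=float)
    # Pairwise Euclidean distances between all boundary points.
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    # Largest nearest-neighbor distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For large contours, `scipy.spatial.distance.directed_hausdorff` avoids materializing the full pairwise distance matrix; the brute-force version here is kept for transparency.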