Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States.
Med Image Anal. 2021 Aug;72:102101. doi: 10.1016/j.media.2021.102101. Epub 2021 May 17.
In post-operative radiotherapy for prostate cancer, precisely contouring the clinical target volume (CTV) to be irradiated is challenging, because the cancerous prostate gland has been surgically removed, so the CTV encompasses the microscopic spread of tumor cells, which cannot be visualized in clinical images such as computed tomography or magnetic resonance imaging. In current clinical practice, physicians segment CTVs manually based on their relationship with nearby organs and other clinical information, which leads to large inter-physician variability. Automating post-operative prostate CTV segmentation with traditional image segmentation methods has yielded suboptimal results. We propose using deep learning (DL) to accurately segment post-operative prostate CTVs. The proposed model is trained using labels that were clinically approved and used for patient treatment. To segment the CTV, we first segment the nearby organs at risk (OARs), then use their relationship with the CTV to assist CTV segmentation. To ease the encoding of distance-based features, which are important for learning both the CTV contours' overlap with the surrounding OARs and the distance from their borders, we add distance prediction as an auxiliary task to the CTV network. To make the DL model practical for clinical use, we use Monte Carlo dropout (MCDO) to estimate model uncertainty. Using MCDO, we estimate and visualize the 95% upper and lower confidence bounds for each prediction, which inform physicians of areas that might require correction. The proposed model achieves an average Dice similarity coefficient (DSC) of 0.87 on a holdout test dataset, much better than established methods such as atlas-based methods (DSC < 0.7). The predicted contours agree with physician contours better than medical resident contours do. A reader study showed that the clinical acceptability of the automatically segmented CTV contours is equal to that of approved clinical contours manually drawn by physicians. Our deep learning model can accurately segment CTVs with the help of surrounding organ masks. Because the DL framework can outperform residents, it can be implemented practically in a clinical workflow to generate initial CTV contours or to guide residents in generating these contours for physicians to review and revise. Providing physicians with the 95% confidence bounds could streamline the review process for an efficient clinical workflow, as this would enable physicians to concentrate their inspecting and editing efforts on the large uncertain areas.
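The abstract describes segmenting nearby organs first and then using their relationship with the CTV to assist CTV segmentation. A minimal sketch of one plausible arrangement, concatenating the OAR masks as extra input channels to the CTV network, is given below; the `CTVNet` class and its layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch: feed OAR masks as extra input channels to the CTV
# segmentation network (assumed arrangement, not the paper's exact design).
import torch
import torch.nn as nn

class CTVNet(nn.Module):
    """Toy stand-in for a 3D segmentation network for the CTV."""
    def __init__(self, n_oars: int = 3):
        super().__init__()
        # 1 CT channel + one binary mask channel per surrounding OAR
        self.body = nn.Sequential(
            nn.Conv3d(1 + n_oars, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout3d(p=0.1),          # dropout layer, reused for MCDO below
            nn.Conv3d(16, 1, kernel_size=1),
        )

    def forward(self, ct: torch.Tensor, oar_masks: torch.Tensor) -> torch.Tensor:
        # ct: (B, 1, D, H, W); oar_masks: (B, n_oars, D, H, W)
        x = torch.cat([ct, oar_masks], dim=1)
        return self.body(x)               # CTV logits, shape (B, 1, D, H, W)
```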
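For the distance-prediction auxiliary task, a common way to construct the regression target is a Euclidean distance transform of a binary structure mask. The sketch below uses `scipy.ndimage.distance_transform_edt` and is an assumption about how such a target could be built, not the paper's exact formulation.

```python
# Hypothetical sketch: build a signed distance map from a binary mask to serve
# as the auxiliary regression target (assumed construction, voxel units).
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Positive outside the structure, negative inside."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)  # distance from outside voxels to the structure
    inside = distance_transform_edt(mask)    # distance from inside voxels to the exterior
    return outside - inside
```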
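For the MCDO uncertainty estimate, a minimal sketch under common assumptions: dropout layers are kept active at inference, repeated stochastic forward passes are collected, and per-voxel percentiles give the 95% lower and upper bounds. The number of passes and the use of per-voxel percentiles here are illustrative choices, not the paper's reported settings.

```python
# Hypothetical sketch of Monte Carlo dropout inference: keep dropout stochastic
# at test time, run repeated forward passes, and take per-voxel percentiles as
# 95% lower/upper confidence bounds (illustrative settings).
import torch

def mc_dropout_bounds(model, ct, oar_masks, n_samples: int = 20):
    model.train()                          # keeps Dropout3d stochastic (no batchnorm in this toy model)
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            logits = model(ct, oar_masks)
            probs.append(torch.sigmoid(logits))
    probs = torch.stack(probs, dim=0)      # (n_samples, B, 1, D, H, W)
    lower = torch.quantile(probs, 0.025, dim=0)
    upper = torch.quantile(probs, 0.975, dim=0)
    mean = probs.mean(dim=0)
    return mean, lower, upper              # threshold (e.g. at 0.5) to obtain contours
```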
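The reported evaluation metric is the Dice similarity coefficient; for reference, a standard implementation for binary masks (generic, not specific to this paper):

```python
# Standard Dice similarity coefficient (DSC) for binary masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))
```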