Zhao Hengrui, Liang Xiao, Meng Boyu, Dohopolski Michael, Choi Byongsu, Cai Bin, Lin Mu-Han, Bai Ti, Nguyen Dan, Jiang Steve
Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
Phys Imaging Radiat Oncol. 2024 Jul 14;31:100610. doi: 10.1016/j.phro.2024.100610. eCollection 2024 Jul.
Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation often fail to reach clinical acceptability, in part because they overlook the wealth of information available from the initial planning and prior adaptive fractions that could enhance segmentation precision.
We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to significantly refine the segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an innovative architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training with simulated data followed by fine-tuning on a clinical dataset.
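The abstract does not specify the exact gating or placement details of the LSTM units, so the following is only a minimal, hypothetical numpy sketch of the core idea: an LSTM cell sitting on a U-Net skip connection that carries hidden and cell states across treatment fractions, so the skip features passed to the decoder at fraction t are informed by fractions 1..t-1. The pointwise (1x1) channel mixing used here is a simplification; a real implementation would likely use convolutional gates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SkipLSTMCell:
    """Per-pixel LSTM applied to skip-connection feature maps.

    Hypothetical simplification of the LSTM-UNet idea: gates mix
    channels pointwise (1x1) on [input; hidden] feature maps.
    """

    def __init__(self, channels, rng):
        c = channels
        # One weight matrix per gate, acting on concatenated channels.
        self.Wf = rng.standard_normal((c, 2 * c)) * 0.1  # forget gate
        self.Wi = rng.standard_normal((c, 2 * c)) * 0.1  # input gate
        self.Wo = rng.standard_normal((c, 2 * c)) * 0.1  # output gate
        self.Wg = rng.standard_normal((c, 2 * c)) * 0.1  # candidate

    def step(self, x, h, cell):
        # x, h, cell: (C, H, W) feature maps from the encoder / memory.
        z = np.concatenate([x, h], axis=0)               # (2C, H, W)
        zf = np.tensordot(self.Wf, z, axes=([1], [0]))   # (C, H, W)
        zi = np.tensordot(self.Wi, z, axes=([1], [0]))
        zo = np.tensordot(self.Wo, z, axes=([1], [0]))
        zg = np.tensordot(self.Wg, z, axes=([1], [0]))
        cell = sigmoid(zf) * cell + sigmoid(zi) * np.tanh(zg)
        h = sigmoid(zo) * np.tanh(cell)
        return h, cell  # h is the memory-aware skip feature

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
lstm = SkipLSTMCell(C, rng)
h = np.zeros((C, H, W))
cell = np.zeros((C, H, W))
# Feed skip features from the planning CT and successive fractions:
# the decoder at each fraction would consume h instead of raw skips.
for fraction in range(3):
    x = rng.standard_normal((C, H, W))  # encoder skip features at t
    h, cell = lstm.step(x, h, cell)
```

In a full network, one such cell (with convolutional gates) would sit at each resolution level of the U-Net, and the states would be initialized from the pre-trained model and updated fraction by fraction during the treatment course.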
Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared to 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory.
Our proposed model surpasses baseline segmentation frameworks by effectively utilizing information from prior fractions, reducing the effort clinicians must spend revising auto-segmentation results. Moreover, it is complementary to registration-based methods that offer better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation capabilities on synthetic CT images.