Phys Med Biol. 2018 Feb 8;63(4):04NT01. doi: 10.1088/1361-6560/aaa731.
We study a threshold-driven optimization methodology for automatically generating an IMRT treatment plan that is driven by a reference DVH, and present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and an associated penalty weight. Conventional manual and automated planning with such a function iteratively updates the penalty weights while keeping the thresholds constant, an unintuitive and often inconsistent way to plan toward a reference DVH. However, driving the dose distribution through the threshold values instead of the penalty weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values and iteratively improves the quality of that assignment, effectively handling both sub-optimal and infeasible reference DVHs. As a proof of concept, TORA was applied to a prostate case and a liver case. Reference DVHs were generated using a conventional voxel-based objective and then altered to be either infeasible or easy to achieve. TORA closely recreated the reference DVHs within 5-15 iterations of solving a simple convex sub-problem. TORA therefore has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning become more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. Threshold-focused objective tuning should be explored as an alternative to the conventional practice of updating penalty weights for DVH-guided treatment planning.
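The two ingredients the abstract describes can be sketched in code: a voxel-based quadratic penalty with per-voxel dose thresholds, and an outer loop that rank-matches the current voxel doses to a sorted reference DVH and uses the matched values as the next thresholds. The following is a minimal toy illustration, not the paper's algorithm or clinical model: the dose-influence matrix `A`, the reference DVH, the problem sizes, and the projected-gradient sub-problem solver are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dose model: dose d = A @ x for nonnegative beamlet weights x
# (hypothetical sizes and random influence matrix, not clinical data).
n_vox, n_beam = 50, 20
A = rng.uniform(0.0, 1.0, size=(n_vox, n_beam))

# Reference DVH represented as a sorted vector of per-voxel dose values.
ref_dvh = np.sort(rng.uniform(40.0, 60.0, size=n_vox))

def penalty_grad(x, t, w_under=1.0, w_over=1.0):
    """Gradient of the voxel-based quadratic under-/over-dose penalty
    sum_v w_under*max(t_v - d_v, 0)^2 + w_over*max(d_v - t_v, 0)^2."""
    d = A @ x
    r = w_over * np.maximum(d - t, 0.0) - w_under * np.maximum(t - d, 0.0)
    return 2.0 * A.T @ r

def solve_subproblem(t, x0, steps=500, lr=1e-3):
    """Projected gradient descent on the convex penalty with thresholds t fixed."""
    x = x0.copy()
    for _ in range(steps):
        x = np.maximum(x - lr * penalty_grad(x, t), 0.0)  # keep beamlets nonneg
    return x

# Threshold-driven outer loop: after each convex solve, rank-match the current
# voxel doses to the reference DVH and use the matched doses as new thresholds.
x = np.full(n_beam, 1.0)
t = np.full(n_vox, ref_dvh.mean())
for _ in range(10):
    x = solve_subproblem(t, x)
    order = np.argsort(A @ x)  # voxels ranked coldest to hottest
    t[order] = ref_dvh         # coldest voxel gets the lowest reference dose, etc.

final = np.sort(A @ x)
mean_abs_gap = float(np.abs(final - ref_dvh).mean())
```

Note that the penalty weights stay fixed throughout; only the thresholds change between the convex sub-problems, which is the contrast with weight-tuning that the abstract draws.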