Department of Radiation Oncology, University of Washington Medical Center, 1959 NE Pacific Street, Box 356043, Seattle, Washington, 98195, USA.
Med Phys. 2018 Dec;45(12):5359-5365. doi: 10.1002/mp.13242. Epub 2018 Nov 8.
The review of a radiation therapy plan by a physicist prior to treatment is a standard tool for ensuring the quality of treatments. However, little is known about how well this task is performed in practice. The goal of this study is to present a novel method to measure the effectiveness of physics plan review by introducing simulated errors into computerized "mock" treatment charts and measuring the performance of plan review by physicists.
We generated six simulated treatment charts, each containing multiple errors. To select errors, we compiled a list based on events from a departmental incident learning system and an international incident learning system (SAFRON). The seventeen errors with the highest scores for frequency and severity were distributed across the six mock treatment charts. Eight physicists reviewed the simulated charts as they would a normal pretreatment plan review, with each chart reviewed by at least six physicists, yielding 113 data points for evaluation. Observer bias was minimized using a simple-error vs hidden-error approach, with detectability scores used for stratification. Confidence intervals for the proportion of errors detected were computed using the Wilson score interval.
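The Wilson score interval used here can be computed directly from a detection count and a number of reviews. A minimal sketch in Python; the counts 76/113 and 0/6 are illustrative assumptions chosen to be consistent with the reported 67% [58-75%] overall rate and the 0% [0-39%] rate for an error seen by six reviewers, not exact values from the study:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.

    z = 1.96 corresponds to a 95% confidence level.
    """
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    # Clamp to [0, 1] to avoid floating-point spill at the boundaries
    return max(0.0, center - half), min(1.0, center + half)

# Overall detection: assuming 76 of 113 simulated errors were caught (~67%)
lo, hi = wilson_interval(76, 113)
print(f"{lo:.0%}-{hi:.0%}")  # -> 58%-75%

# An error missed by all six reviewers: 0 of 6 detections
lo0, hi0 = wilson_interval(0, 6)
print(f"{lo0:.0%}-{hi0:.0%}")  # -> 0%-39%
```

Unlike the normal-approximation ("Wald") interval, the Wilson interval remains sensible at small n and at proportions near 0 or 1, which is why it suits the 0/6 case above (the Wald interval would collapse to 0-0%).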
Simulated errors were detected in 67% of reviews [58-75%] (95% confidence intervals [CI] in brackets throughout). Among the errors included in the simulated plans, the following scenarios had the highest detection rates: an incorrect isocenter on the DRR (93% [70-99%]), a planned dose differing from the prescribed dose (92% [67-99%]), and invalid QA (85% [58-96%]). Errors with low detection rates included an incorrect CT dataset (0% [0-39%]) and incorrect isocenter localization in the planning system (38% [18-64%]). Detection rates from the simulated charts were compared against observed detection rates of errors from a departmental incident learning system.
It has been notoriously difficult to quantify error and safety performance in oncology. This study uses a novel technique of simulated errors to quantify performance and suggests that pretreatment physics plan review identifies some errors with high fidelity while others are more challenging to detect. These data will guide future work on standardization and automation. The example process studied here was physics plan review, but this approach of simulated errors may be applied in other contexts and may also be useful for training and education.