Pillai Malvika, Shumway John W, Adapa Karthik, Dooley John, McGurk Ross, Mazur Lukasz M, Das Shiva K, Chera Bhishamjit S
Carolina Health Informatics Program, University of North Carolina, Chapel Hill, North Carolina.
Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina.
Adv Radiat Oncol. 2023 Apr 6;8(6):101234. doi: 10.1016/j.adro.2023.101234. eCollection 2023 Nov-Dec.
Pretreatment quality assurance (QA) of treatment plans often requires a high cognitive workload and considerable time expenditure. This study explores the use of machine learning to classify pretreatment chart check QA for a given radiation plan as difficult or less difficult, thereby alerting the physicists to increase scrutiny on difficult plans.
Pretreatment QA data were collected for 973 cases between July 2018 and October 2020. The outcome variable, degree of difficulty, was collected as a subjective rating from the physicists who performed the pretreatment chart checks. Potential features were identified based on clinical relevance, contribution to plan complexity, and QA metrics. Five machine learning models were developed: support vector machine, random forest classifier, AdaBoost classifier, decision tree classifier, and neural network. These were incorporated into a voting classifier, in which at least 2 algorithms needed to predict a case as difficult for it to be classified as such. Sensitivity analyses were conducted to evaluate feature importance.
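The voting scheme described above (a case is flagged as difficult when at least 2 of the 5 models predict it as such) can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic stand-in data; the paper's actual features, hyperparameters, and train/test split are not specified in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the pretreatment QA feature matrix
# (973 cases, as in the study; feature count is illustrative)
X, y = make_classification(n_samples=973, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The five model families named in the study
models = [
    SVC(random_state=0),
    RandomForestClassifier(random_state=0),
    AdaBoostClassifier(random_state=0),
    DecisionTreeClassifier(random_state=0),
    MLPClassifier(max_iter=500, random_state=0),
]
for m in models:
    m.fit(X_tr, y_tr)

# Count "difficult" (label 1) votes per test case; flag the case as
# difficult when at least 2 of the 5 models predict it as such
votes = np.sum([m.predict(X_te) for m in models], axis=0)
y_pred = (votes >= 2).astype(int)
```

Note that this "at least 2 of 5" rule is deliberately more permissive than a majority vote, trading some false positives for higher sensitivity on difficult cases.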
The voting classifier achieved an overall accuracy of 77.4% on the test set, with 76.5% accuracy on difficult cases and 78.4% accuracy on less difficult cases. Sensitivity analysis showed that features associated with plan complexity (number of fractions, dose per monitor unit, number of planning structures, and number of image sets) and clinical relevance (patient age) were sensitive across at least 3 algorithms.
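The per-class accuracies reported above (76.5% on difficult cases, 78.4% on less difficult cases) are class-conditional accuracies: the fraction of cases in each true class that were predicted correctly. A small helper, shown here as an assumed illustration rather than the authors' code, makes the metric explicit:

```python
import numpy as np

def per_class_accuracy(y_true, y_pred):
    """Fraction of cases in each true class predicted correctly."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return {c: float(np.mean(y_pred[y_true == c] == c))
            for c in np.unique(y_true)}

# Toy example: class 0 fully correct, class 1 half correct
# per_class_accuracy([1, 1, 0, 0], [1, 0, 0, 0]) -> {0: 1.0, 1: 0.5}
```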
This approach can be used to allocate plans to physicists equitably rather than randomly, potentially improving pretreatment chart check effectiveness by reducing the number of errors that propagate downstream.