Luo Yi, Tseng Huan-Hsin, Cui Sunan, Wei Lise, Ten Haken Randall K, El Naqa Issam
Department of Radiation Oncology, University of Michigan, 519 W William Street, Ann Arbor, MI, USA.
BJR Open. 2019 Jul 4;1(1):20190021. doi: 10.1259/bjro.20190021. eCollection 2019.
Radiation outcomes prediction (ROP) plays an important role in personalized prescription and adaptive radiotherapy. A clinical decision may not only depend on an accurate prediction of radiation outcomes, but also needs to be made with an informed understanding of the relationships among patients' characteristics, radiation response, and treatment plans. As more biophysical information about patients becomes available, machine learning (ML) techniques will have great potential for improving ROP. Creating explainable ML methods is an ultimate goal for clinical practice but remains a challenging one. Towards complete explainability, the interpretability of ML approaches needs to be explored first. Hence, this review focuses on the application of ML techniques for clinical adoption in radiation oncology, balancing the accuracy of the predictive model of interest against its interpretability. ML algorithms can generally be classified as interpretable (IP) or non-interpretable (NIP, "black box") techniques. While the former may provide a clearer explanation to aid clinical decision-making, it is generally outperformed by the latter in predictive accuracy. Considerable effort and resources have therefore been dedicated to balancing the accuracy and interpretability of ML approaches in ROP, but more still needs to be done. This review introduces current progress in increasing the accuracy of IP ML approaches, and summarizes major trends for improving the interpretability, and alleviating the "black box" stigma, of ML in radiation outcomes modeling. Efforts to integrate IP and NIP ML approaches to produce predictive models with higher accuracy and interpretability for ROP are also discussed.
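As a minimal, hypothetical illustration of the IP/NIP distinction the abstract draws (not taken from the review itself), the sketch below fits an interpretable logistic regression and a "black box" random forest to synthetic stand-in data, then applies permutation importance as a post-hoc explanation for the latter; all data, feature counts, and model choices here are illustrative assumptions.

```python
# Toy sketch of the IP vs. NIP trade-off, assuming synthetic data in place of
# real patient features (e.g., dosimetric/biophysical variables).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome data standing in for a radiation outcome endpoint.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# IP model: coefficients are directly readable as log-odds effect sizes.
ip_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("IP accuracy:", ip_model.score(X_test, y_test))
print("IP coefficients:", ip_model.coef_.ravel())

# NIP model: often higher accuracy, but with no directly readable parameters.
nip_model = RandomForestClassifier(n_estimators=200, random_state=0)
nip_model.fit(X_train, y_train)
print("NIP accuracy:", nip_model.score(X_test, y_test))

# Post-hoc interpretability: permutation importance estimates each feature's
# contribution to the NIP model's test-set performance.
imp = permutation_importance(nip_model, X_test, y_test, n_repeats=20,
                             random_state=0)
print("NIP permutation importances:", imp.importances_mean)
```

Comparing the IP model's coefficients against the NIP model's permutation importances is one simple way to check whether a black-box gain in accuracy comes with an explanation a clinician could act on, which is the balance the review examines.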