Bioemission Technology Solutions - BIOEMTECH, Athens, Greece; 3DMI Research Group, Department of Medical Physics, University of Patras, Rion GR 265 04, Greece.
University of Warsaw - Institute of Informatics, Warsaw, Poland.
Phys Med. 2021 Mar;83:108-121. doi: 10.1016/j.ejmp.2021.03.009. Epub 2021 Mar 22.
Over the last decade, the field of Artificial Intelligence (AI) has evolved extensively. Modern radiation oncology relies on advanced computational methods aimed at personalization and high diagnostic and therapeutic precision. The quantity of available imaging data and the rapid development of Machine Learning (ML), particularly Deep Learning (DL), have triggered research into uncovering "hidden" biomarkers and quantitative features from anatomical and functional medical images. Deep Neural Networks (DNNs) have achieved outstanding performance and broad adoption in image processing tasks. Lately, DNNs have been considered for radiomics, and their potential for explainable AI (XAI) may aid classification and prediction in clinical practice. However, most such studies use limited datasets and lack generalized applicability. In this study we review the basics of radiomics feature extraction, DNNs in image analysis, and the major interpretability methods that enable explainable AI. Furthermore, we discuss the crucial requirement of multicenter recruitment of large datasets, which increases biomarker variability, so as to establish the potential clinical value of radiomics and to develop robust explainable AI models.