Department of Information Engineering, University of Pisa, Largo Lucio Lazzarino 1, Pisa, 56122, Italy.
Comput Med Imaging Graph. 2024 Oct;117:102433. doi: 10.1016/j.compmedimag.2024.102433. Epub 2024 Sep 11.
Oral squamous cell carcinoma recognition presents a challenge due to late diagnosis and costly data acquisition. A cost-efficient, computerized screening system is crucial for early disease detection, minimizing the need for expert intervention and expensive analysis. Moreover, transparency is essential to align these systems with critical-sector applications. Explainable Artificial Intelligence (XAI) provides techniques for understanding models. However, current XAI is mostly data-driven and focused on addressing developers' requirements for improving models rather than clinical users' demands for expressing relevant insights. Among different XAI strategies, we propose a solution that combines the Case-Based Reasoning paradigm, to provide visual output explanations, with Informed Deep Learning (IDL), to integrate medical knowledge within the system. A key aspect of our solution lies in its capability to handle data imperfections, including labeling inaccuracies and artifacts, thanks to an ensemble architecture on top of the deep learning (DL) workflow. We conducted several experimental benchmarks on a dataset collected in collaboration with medical centers. Our findings reveal that the IDL approach yields an accuracy of 85%, surpassing the 77% accuracy achieved by DL alone. Furthermore, we measured the human-centered explainability of the two approaches and found that IDL generates explanations more congruent with clinical users' demands.