

Ensemble-based genetic algorithm explainer with automized image segmentation: A case study on melanoma detection dataset.

Author information

Nematzadeh Hossein, García-Nieto José, Navas-Delgado Ismael, Aldana-Montes José F

Affiliations

ITIS Software, Universidad de Málaga, Arquitecto Francisco Peñalosa 18, Malaga, 29071, Spain; Departamento de Lenguajes y Ciencias de la Computación, Universidad de Málaga, Malaga, Spain.

ITIS Software, Universidad de Málaga, Arquitecto Francisco Peñalosa 18, Malaga, 29071, Spain; Biomedical Research Institute of Málaga (IBIMA), Universidad de Málaga, Malaga, Spain; Departamento de Lenguajes y Ciencias de la Computación, Universidad de Málaga, Malaga, Spain.

Publication information

Comput Biol Med. 2023 Mar;155:106613. doi: 10.1016/j.compbiomed.2023.106613. Epub 2023 Feb 5.

Abstract

Explainable Artificial Intelligence (XAI) makes AI understandable to the human user, particularly when the model is complex and opaque. Local Interpretable Model-agnostic Explanations (LIME) provides an image explainer package that is used to explain deep learning models. LIME's image explainer requires several parameters to be tuned manually by an expert in advance, including the number of top features to display and the number of superpixels in the segmented input image. This parameter tuning is time-consuming. Hence, with the aim of developing an image explainer that automates image segmentation, this paper proposes the Ensemble-based Genetic Algorithm Explainer (EGAE) for melanoma cancer detection, which automatically detects and presents the informative sections of the image to the user. EGAE has three phases. First, the sparsity of the chromosomes in the genetic algorithms (GAs) is determined heuristically. Then, multiple GAs are executed consecutively; these GAs differ in the number of superpixels in the input image, which results in different chromosome lengths. Finally, the results of the GAs are ensembled using consensus and majority voting. The paper also introduces how Euclidean distance can be used to measure the distance between the actual explanation (delineated by experts) and the calculated explanation (computed by the explainer) as an accuracy measure. Experimental results on a melanoma dataset show that EGAE automatically detects informative lesions and improves the accuracy of explanation compared with LIME while remaining efficient. The Python code for EGAE, the ground truths delineated by clinicians, and the melanoma detection dataset are available at https://github.com/KhaosResearch/EGAE.
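
The abstract's third phase ensembles the superpixel selections produced by several GA runs of different chromosome lengths. Below is a minimal NumPy sketch of that idea, not the authors' implementation: it assumes each GA run returns a binary chromosome over its own superpixel segmentation, that chromosomes are first projected to pixel-level masks, and that "consensus" means a pixel is kept by every run while "majority" means more than half of the runs. The function names (chromosome_to_mask, ensemble_masks) and the toy segmentations are hypothetical.

```python
import numpy as np

def chromosome_to_mask(chromosome, segments):
    """Project a binary GA chromosome (one gene per superpixel) onto a
    pixel-level mask. `segments` is a label image where pixel value k
    means the pixel belongs to superpixel k."""
    mask = np.zeros(segments.shape, dtype=bool)
    for superpixel_id, selected in enumerate(chromosome):
        if selected:
            mask |= (segments == superpixel_id)
    return mask

def ensemble_masks(masks, rule="majority"):
    """Combine pixel-level masks from several GA runs.
    "consensus": keep a pixel only if every run selected it.
    "majority":  keep a pixel if more than half of the runs selected it."""
    votes = np.stack(masks).astype(int).sum(axis=0)
    if rule == "consensus":
        return votes == len(masks)
    return votes > len(masks) / 2

# Toy 4x4 "image": one coarse segmentation (4 superpixels) and one finer
# segmentation (8 superpixels), so the chromosomes have different lengths.
segments_coarse = np.array([[0, 0, 1, 1],
                            [0, 0, 1, 1],
                            [2, 2, 3, 3],
                            [2, 2, 3, 3]])
segments_fine = np.array([[0, 0, 1, 1],
                          [2, 2, 3, 3],
                          [4, 4, 5, 5],
                          [6, 6, 7, 7]])

# Hypothetical best chromosomes returned by three independent GA runs.
run_a = chromosome_to_mask([1, 0, 1, 0], segments_coarse)
run_b = chromosome_to_mask([1, 0, 0, 0], segments_coarse)
run_c = chromosome_to_mask([1, 0, 1, 0, 1, 0, 0, 0], segments_fine)

print(ensemble_masks([run_a, run_b, run_c], rule="consensus"))
print(ensemble_masks([run_a, run_b, run_c], rule="majority"))
```

In this toy run, consensus keeps only the top-left superpixel selected by all three chromosomes, while majority voting also keeps the pixels chosen by two of the three runs, illustrating why consensus is the stricter of the two rules.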

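The abstract also proposes the Euclidean distance between the expert-delineated explanation and the explainer's output as an accuracy measure. The sketch below is one plausible reading, assuming both explanations are available as binary pixel masks of the same shape and the metric is the L2 distance between the flattened masks; the paper's exact encoding of the explanations may differ, and the example arrays are hypothetical.

```python
import numpy as np

def explanation_distance(expert_mask, explainer_mask):
    """Euclidean (L2) distance between two explanations, each assumed to be a
    binary pixel mask of the same shape (1 = pixel belongs to the explanation).
    Smaller values mean the computed explanation is closer to the expert one."""
    a = np.asarray(expert_mask, dtype=float).ravel()
    b = np.asarray(explainer_mask, dtype=float).ravel()
    return float(np.linalg.norm(a - b))

# Hypothetical 4x4 ground truth and two candidate explanations.
ground_truth = np.array([[0, 0, 0, 0],
                         [0, 1, 1, 0],
                         [0, 1, 1, 0],
                         [0, 0, 0, 0]])
close_guess = ground_truth.copy()
close_guess[1, 2] = 0                       # misses a single pixel
far_guess = np.zeros_like(ground_truth)     # misses the whole lesion

print(explanation_distance(ground_truth, close_guess))  # 1.0
print(explanation_distance(ground_truth, far_guess))    # 2.0
```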
