

Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet.

Author information

Taşcı Burak

Affiliation

Vocational School of Technical Sciences, Firat University, Elazig 23119, Turkey.

Publication information

Diagnostics (Basel). 2023 Feb 23;13(5):859. doi: 10.3390/diagnostics13050859.

Abstract

Artificial intelligence models do not reveal exactly how their predictions are reached, and this lack of transparency is a major drawback. Particularly in medical applications, interest in explainable artificial intelligence (XAI), which helps to develop methods for visualizing, explaining, and analyzing deep learning models, has increased recently. With explainable artificial intelligence, it is possible to understand whether the solutions offered by deep learning techniques are safe. This paper aims to diagnose a fatal disease such as a brain tumor faster and more accurately using XAI methods. In this study, we used datasets that are widely employed in the literature: the four-class Kaggle brain tumor dataset (Dataset I) and the three-class figshare brain tumor dataset (Dataset II). A pre-trained deep learning model, DenseNet201, serves as the feature extractor. The proposed automated brain tumor detection model comprises five stages. First, DenseNet201 is trained on the brain MR images, and the tumor area is segmented with Grad-CAM. Features are then extracted from the trained DenseNet201 using the exemplar method, and the extracted features are selected with the iterative neighborhood component analysis (INCA) feature selector. Finally, the selected features are classified using a support vector machine (SVM) with 10-fold cross-validation. Accuracies of 98.65% and 99.97% were obtained for Datasets I and II, respectively. The proposed model outperformed state-of-the-art methods and can be used to aid radiologists in their diagnosis.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9713/10000758/2957b385ca66/diagnostics-13-00859-g001.jpg
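The pipeline described in the abstract (pre-trained DenseNet201 features, INCA feature selection, SVM with 10-fold cross-validation) can be sketched in a few lines of Python. The sketch below is not the authors' released code: the Grad-CAM cropping and exemplar (patch-wise) stages are omitted, torchvision's ImageNet weights stand in for the fine-tuned network, and because scikit-learn exposes no per-feature NCA weights (MATLAB's fscnca, typically used for INCA, does), an ANOVA F-score ranking is used as a stand-in ranker inside the iterative selection loop.

```python
# Minimal sketch of the abstract's pipeline, under the assumptions stated above.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Frozen DenseNet201 backbone: replacing the ImageNet head with Identity
# exposes the 1920-d pooled features that the pipeline classifies.
net = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
net.classifier = torch.nn.Identity()
net.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths):
    """Return an (n_images, 1920) feature matrix for the MRI slices."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(net(x).squeeze(0).numpy())
    return np.vstack(feats)

def inca_style_selection(X, y, k_grid=range(100, 1001, 100)):
    """INCA-like loop: rank features once (ANOVA F-score as a stand-in for
    NCA weights), then keep the subset size with the best 10-fold SVM
    cross-validation accuracy."""
    f_scores, _ = f_classif(X, y)
    order = np.argsort(f_scores)[::-1]   # most informative features first
    best_idx, best_acc = order[:k_grid[0]], -1.0
    for k in k_grid:
        idx = order[:k]
        acc = cross_val_score(SVC(kernel="rbf"), X[:, idx], y, cv=10).mean()
        if acc > best_acc:
            best_idx, best_acc = idx, acc
    return best_idx, best_acc
```

Called as `idx, acc = inca_style_selection(extract_features(paths), labels)`, the loop mirrors INCA's core idea: rank the features once, sweep candidate subset sizes, and keep whichever subset maximizes cross-validated accuracy.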
