

Spectral Zones-Based SHAP/LIME: Enhancing Interpretability in Spectral Deep Learning Models Through Grouped Feature Analysis.

Authors

Contreras Jhonatan, Winterfeld Andreea, Popp Juergen, Bocklitz Thomas

Affiliations

Institute of Physical Chemistry (IPC) and Abbe Center of Photonics (ACP), Member of the Leibniz Centre for Photonics in Infection Research (LPI), Friedrich Schiller University Jena, Helmholtzweg 4, 07743 Jena, Germany.

Leibniz Institute of Photonic Technology, Member of the Leibniz Centre for Photonics in Infection Research (LPI), Member of Leibniz Health Technologies, Albert Einstein Straße 9, 07745 Jena, Germany.

Publication

Anal Chem. 2024 Oct 1;96(39):15588-15597. doi: 10.1021/acs.analchem.4c02329. Epub 2024 Sep 17.

DOI: 10.1021/acs.analchem.4c02329
PMID: 39289923
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11447665/
Abstract

Interpretability is just as important as accuracy when it comes to complex models, especially in the context of deep learning models. Explainable artificial intelligence (XAI) approaches have been developed to address this problem. The literature on XAI for spectroscopy mainly emphasizes independent feature analysis with limited application of zone analysis. Individual feature analysis methods, such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME), have limitations due to their dependence on perturbations. These methods measure how AI models respond to sudden changes in the individual feature values. While they can help identify the most impactful features, the abrupt shifts introduced by replacing these values with zero or the expected ones may not accurately represent real-world scenarios. This can lead to mathematical and computational interpretations that are neither physically realistic nor intuitive to humans. Our proposed method does not rely on individual disturbances. Instead, it targets "spectral zones" to directly estimate the effect of group disturbances on a trained model. Consequently, factors such as sample size, hyperparameter selection, and other training-related considerations are not the primary focus of the XAI methods. To achieve this, we have developed a modified version of LIME and SHAP capable of performing group perturbations, enhancing explainability and realism while minimizing noise in the plots used for interpretability. Additionally, we employed an efficient approach to calculate spectral zones for complex spectra with indistinct spectral boundaries. Users can also define the zones themselves using their domain-specific knowledge.
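The core idea of the abstract, perturbing a whole spectral zone at once rather than one wavenumber at a time, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `zone_importance`, the toy model, and the constant-baseline replacement are all assumptions made for the example; the paper's modified SHAP/LIME use more sophisticated group perturbations.

```python
import numpy as np

def zone_importance(model_fn, spectrum, zones, baseline=0.0):
    """Estimate each spectral zone's effect by perturbing it as a group.

    model_fn : callable mapping a (n_features,) spectrum to a scalar score.
    spectrum : 1-D array of intensities.
    zones    : list of (start, end) index pairs defining contiguous zones
               (user-defined here, mirroring the domain-knowledge option).
    baseline : replacement value for a perturbed zone (a crude stand-in
               for the paper's group perturbation scheme).
    """
    ref = model_fn(spectrum)
    scores = []
    for start, end in zones:
        perturbed = spectrum.copy()
        perturbed[start:end] = baseline          # perturb the whole zone at once
        scores.append(ref - model_fn(perturbed))  # drop in model output
    return np.asarray(scores)

# Toy model: responds only to the mean intensity of a fixed "peak" window.
toy_model = lambda s: float(s[40:60].mean())

spec = np.zeros(100)
spec[45:55] = 1.0                        # synthetic peak
zones = [(0, 40), (40, 60), (60, 100)]   # hypothetical spectral zones
print(zone_importance(toy_model, spec, zones))  # only the peak zone matters
```

Because an entire zone is replaced in one step, the resulting attribution is one value per zone rather than one noisy value per wavenumber, which is the smoothing effect the abstract describes.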


Figures (PMC11447665):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0f/11447665/4b6ef9413859/ac4c02329_0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0f/11447665/228ffd050b50/ac4c02329_0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0f/11447665/9881b9cb9189/ac4c02329_0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0f/11447665/f3833d44f767/ac4c02329_0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0f/11447665/75180b4c2a55/ac4c02329_0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0f/11447665/25b72ebe1156/ac4c02329_0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0f0f/11447665/8d590aceef61/ac4c02329_0007.jpg

Similar Articles

1
Spectral Zones-Based SHAP/LIME: Enhancing Interpretability in Spectral Deep Learning Models Through Grouped Feature Analysis.
Anal Chem. 2024 Oct 1;96(39):15588-15597. doi: 10.1021/acs.analchem.4c02329. Epub 2024 Sep 17.
2
Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
3
Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review.
Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626.
4
Explainable artificial intelligence for spectroscopy data: a review.
Pflugers Arch. 2025 Apr;477(4):603-615. doi: 10.1007/s00424-024-02997-y. Epub 2024 Aug 1.
5
The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.
Comput Biol Med. 2023 Nov;166:107555. doi: 10.1016/j.compbiomed.2023.107555. Epub 2023 Oct 4.
6
Unboxing Deep Learning Model of Food Delivery Service Reviews Using Explainable Artificial Intelligence (XAI) Technique.
Foods. 2022 Jul 8;11(14):2019. doi: 10.3390/foods11142019.
7
Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection.
Brain Inform. 2024 Apr 5;11(1):10. doi: 10.1186/s40708-024-00222-1.
8
Explainability and white box in drug discovery.
Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
9
Responsible AI for cardiovascular disease detection: Towards a privacy-preserving and interpretable model.
Comput Methods Programs Biomed. 2024 Sep;254:108289. doi: 10.1016/j.cmpb.2024.108289. Epub 2024 Jun 17.
10
Interpretable AI for bio-medical applications.
Complex Eng Syst. 2022 Dec;2(4). doi: 10.20517/ces.2022.41. Epub 2022 Dec 28.

Cited By

1
A Prediction Model of Stable Warfarin Doses in Patients After Mechanical Heart Valve Replacement Based on a Machine Learning Algorithm.
Rev Cardiovasc Med. 2025 Jun 26;26(6):33425. doi: 10.31083/RCM33425. eCollection 2025 Jun.

References

1
Recent advances of Raman spectroscopy for the analysis of bacteria.
Anal Sci Adv. 2023 Mar 27;4(3-4):81-95. doi: 10.1002/ansa.202200066. eCollection 2023 May.
2
Explainable AI for unveiling deep learning pollen classification model based on fusion of scattered light patterns and fluorescence spectroscopy.
Sci Rep. 2023 Feb 24;13(1):3205. doi: 10.1038/s41598-023-30064-6.
3
Explainable artificial intelligence model to predict brain states from fNIRS signals.
Front Hum Neurosci. 2023 Jan 19;16:1029784. doi: 10.3389/fnhum.2022.1029784. eCollection 2022.
4
Combining Raman spectroscopy and machine learning to assist early diagnosis of gastric cancer.
Spectrochim Acta A Mol Biomol Spectrosc. 2023 Feb 15;287(Pt 1):122049. doi: 10.1016/j.saa.2022.122049. Epub 2022 Oct 28.
5
Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review.
IEEE Rev Biomed Eng. 2023;16:5-21. doi: 10.1109/RBME.2022.3185953. Epub 2023 Jan 5.
6
Laser tweezers Raman spectroscopy combined with deep learning to classify marine bacteria.
Talanta. 2022 Jul 1;244:123383. doi: 10.1016/j.talanta.2022.123383. Epub 2022 Mar 16.
7
Feasibility of In-Line Raman Spectroscopy for Quality Assessment in Food Industry: How Fast Can We Go?
Appl Spectrosc. 2022 May;76(5):559-568. doi: 10.1177/00037028211056931. Epub 2022 Feb 25.
8
Interpreting convolutional neural network for real-time volatile organic compounds detection and classification using optical emission spectroscopy of plasma.
Anal Chim Acta. 2021 Sep 22;1179:338822. doi: 10.1016/j.aca.2021.338822. Epub 2021 Jul 3.
9
Application of Laser-Induced, Deep UV Raman Spectroscopy and Artificial Intelligence in Real-Time Environmental Monitoring-Solutions and First Results.
Sensors (Basel). 2021 Jun 5;21(11):3911. doi: 10.3390/s21113911.
10
Surface-Enhanced Raman Spectroscopy for Environmental Monitoring of Aerosols.
ACS Omega. 2021 Apr 6;6(15):10150-10159. doi: 10.1021/acsomega.1c00207. eCollection 2021 Apr 20.