


Explainable AI: Machine Learning Interpretation in Blackcurrant Powders.

Affiliation

Department of Dairy and Process Engineering, Faculty of Food Science and Nutrition, Poznań University of Life Sciences, 31 Wojska Polskiego St., 60-624 Poznan, Poland.

Publication

Sensors (Basel). 2024 May 17;24(10):3198. doi: 10.3390/s24103198.

DOI: 10.3390/s24103198
PMID: 38794052
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11124776/
Abstract

Recently, explainability in machine and deep learning has become an important area of research and interest, both because of the increasing use of artificial intelligence (AI) methods and because of the need to understand the decisions models make. Explainable artificial intelligence (XAI) responds to growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions models reach more transparent as well as more effective. In this study, models from the 'glass box' group (Decision Tree, among others) and the 'black box' group (Random Forest, among others) were proposed to understand the identification of selected types of currant powders. These models were trained and evaluated with performance indicators such as accuracy, precision, recall, and F1-score. Their predictions were visualized using Local Interpretable Model-Agnostic Explanations (LIME) to assess how effectively specific types of blackcurrant powders are identified from texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models in the framework of currant powder interpretability. Bagging_100 reached values of approximately 0.979 for accuracy, precision, recall, and F1-score; DT0 reached 0.968, 0.972, 0.968, and 0.969; and RF7_gini reached 0.963, 0.964, 0.963, and 0.963. All of these models exceeded 96% on every performance measure. In the future, XAI using model-agnostic methods can be an additional important tool to help analyze data, including food products, even online.
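The evaluation pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the texture-descriptor data, class count, and model settings below are synthetic stand-ins, and the LIME visualization step (via the `lime` package) is omitted to keep the sketch self-contained with scikit-learn only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the five texture descriptors (entropy, contrast,
# correlation, dissimilarity, homogeneity) across three hypothetical powder types.
n_per_class, n_features = 100, 5
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One glass-box model and two black-box ensembles, mirroring the model families
# named in the abstract (DT0, RF7_gini, Bagging_100).
models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(criterion="gini", random_state=0),
    "Bagging": BaggingClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          round(accuracy_score(y_te, y_pred), 3),
          round(precision_score(y_te, y_pred, average="weighted"), 3),
          round(recall_score(y_te, y_pred, average="weighted"), 3),
          round(f1_score(y_te, y_pred, average="weighted"), 3))
```

On real data, the fitted classifiers would then be passed to a LIME tabular explainer to attribute each prediction back to the individual texture descriptors, as the study does.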


Figures (g001–g008):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/71b3a6d1b0c3/sensors-24-03198-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/204a239dca13/sensors-24-03198-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/c394cfbeabf0/sensors-24-03198-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/73411d682ec8/sensors-24-03198-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/4ee18ca37d5d/sensors-24-03198-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/0388074fab0c/sensors-24-03198-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/f4d1442602ab/sensors-24-03198-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3510/11124776/8c4940e84e49/sensors-24-03198-g008.jpg

Similar Articles

1. Explainable AI: Machine Learning Interpretation in Blackcurrant Powders.
Sensors (Basel). 2024 May 17;24(10):3198. doi: 10.3390/s24103198.
2. Efficiency of Identification of Blackcurrant Powders Using Classifier Ensembles.
Foods. 2024 Feb 24;13(5):697. doi: 10.3390/foods13050697.
3. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
4. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.
Comput Biol Med. 2023 Nov;166:107555. doi: 10.1016/j.compbiomed.2023.107555. Epub 2023 Oct 4.
5. Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
6. Explainability and white box in drug discovery.
Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
7. Responsible AI for cardiovascular disease detection: Towards a privacy-preserving and interpretable model.
Comput Methods Programs Biomed. 2024 Sep;254:108289. doi: 10.1016/j.cmpb.2024.108289. Epub 2024 Jun 17.
8. Toward explainable AI-empowered cognitive health assessment.
Front Public Health. 2023 Mar 9;11:1024195. doi: 10.3389/fpubh.2023.1024195. eCollection 2023.
9. A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study.
JMIR Hum Factors. 2024 Jan 25;11:e53378. doi: 10.2196/53378.
10. Advanced interpretable diagnosis of Alzheimer's disease using SECNN-RF framework with explainable AI.
Front Artif Intell. 2024 Sep 2;7:1456069. doi: 10.3389/frai.2024.1456069. eCollection 2024.

Cited By

1. Machine Learning in Sensory Analysis of Mead-A Case Study: Ensembles of Classifiers.
Molecules. 2025 Jul 30;30(15):3199. doi: 10.3390/molecules30153199.
2. Contrasting rule and machine learning based digital self triage systems in the USA.
NPJ Digit Med. 2024 Dec 27;7(1):381. doi: 10.1038/s41746-024-01367-3.

References

1. Efficiency of Identification of Blackcurrant Powders Using Classifier Ensembles.
Foods. 2024 Feb 24;13(5):697. doi: 10.3390/foods13050697.
2. Artificial intelligence meets medical robotics.
Science. 2023 Jul 14;381(6654):141-146. doi: 10.1126/science.adj3312. Epub 2023 Jul 13.
3. Primer on Machine Learning in Electrophysiology.
Arrhythm Electrophysiol Rev. 2023 Mar 28;12:e06. doi: 10.15420/aer.2022.43. eCollection 2023.
4. Explainability of deep learning models in medical video analysis: a survey.
PeerJ Comput Sci. 2023 Mar 14;9:e1253. doi: 10.7717/peerj-cs.1253. eCollection 2023.
5. Prediction of disease comorbidity using explainable artificial intelligence and machine learning techniques: A systematic review.
Int J Med Inform. 2023 Jul;175:105088. doi: 10.1016/j.ijmedinf.2023.105088. Epub 2023 May 4.
6. Texture analysis and artificial neural networks for identification of cereals-case study: wheat, barley and rape seeds.
Sci Rep. 2022 Nov 11;12(1):19316. doi: 10.1038/s41598-022-23838-x.
7. Deep and Machine Learning Using SEM, FTIR, and Texture Analysis to Detect Polysaccharide in Raspberry Powders.
Sensors (Basel). 2021 Aug 30;21(17):5823. doi: 10.3390/s21175823.
8. Explainable AI: A Review of Machine Learning Interpretability Methods.
Entropy (Basel). 2020 Dec 25;23(1):18. doi: 10.3390/e23010018.
9. Advanced machine-learning techniques in drug discovery.
Drug Discov Today. 2021 Mar;26(3):769-777. doi: 10.1016/j.drudis.2020.12.003. Epub 2020 Dec 5.
10. Clinical applications of artificial intelligence in cardiology on the verge of the decade.
Cardiol J. 2021;28(3):460-472. doi: 10.5603/CJ.a2020.0093. Epub 2020 Jul 10.