

Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.

Authors

Govea Jaime, Gutierrez Rommel, Villegas-Ch William

Affiliation

Escuela de Ingeniería en Ciberseguridad, FICA, Universidad de Las Américas, Quito, Ecuador.

Publication

Front Artif Intell. 2024 Sep 5;7:1410790. doi: 10.3389/frai.2024.1410790. eCollection 2024.

DOI: 10.3389/frai.2024.1410790
PMID: 39301478
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11410769/
Abstract

In today's information age, recommender systems have become an essential tool to filter and personalize the massive data flow to users. However, these systems' increasing complexity and opaque nature have raised concerns about transparency and user trust. Lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods like LIME and SHAP to disentangle the model decisions. The results indicated significant improvements in the precision of the recommendations, with a notable increase in the user's ability to understand and trust the suggestions provided by the system. For example, we saw a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value in performance and improving the user experience.
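The abstract describes applying LIME- and SHAP-style methods to explain individual recommendation scores. As an illustration of the underlying idea (not the paper's actual pipeline), the sketch below builds a LIME-style local explanation by hand: perturb the item features around one instance, weight perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The recommender function and feature names are hypothetical.

```python
# Minimal LIME-style local explanation of a toy recommender score.
# The scoring function and the three "features" are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def recommend_score(x):
    # Hypothetical black-box recommender over 3 item features
    # (e.g. genre match, popularity, recency): nonlinear on purpose.
    return 2.0 * x[..., 0] + np.tanh(3.0 * x[..., 1]) - 0.5 * x[..., 2] ** 2

x0 = np.array([0.8, 0.1, 0.4])  # the instance whose score we explain

# 1. Perturb the instance in a small neighborhood around x0.
Z = x0 + rng.normal(scale=0.1, size=(500, 3))
y = recommend_score(Z)

# 2. Weight perturbed samples by proximity to x0 (RBF kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.05)

# 3. Fit a weighted linear surrogate (weighted least squares with intercept).
A = np.hstack([Z, np.ones((len(Z), 1))])
coef, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * y, rcond=None)

# The slopes approximate the local sensitivity of the score to each feature.
print("local attributions:", coef[:3])
```

Near x0 the surrogate should recover the linear term's weight (about 2.0 for the first feature), a positive slope for the tanh term, and a negative slope for the quadratic penalty, matching the black box's local gradient.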

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd52/11410769/727df55389c2/frai-07-1410790-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd52/11410769/97b26ac49f66/frai-07-1410790-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd52/11410769/3abead426ef6/frai-07-1410790-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd52/11410769/7d877b98cd92/frai-07-1410790-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd52/11410769/8acf54e076ea/frai-07-1410790-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd52/11410769/0dd4707c687b/frai-07-1410790-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd52/11410769/8c93dfe47ed0/frai-07-1410790-g006.jpg

Similar Articles

1
Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.
Front Artif Intell. 2024 Sep 5;7:1410790. doi: 10.3389/frai.2024.1410790. eCollection 2024.
2
Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
3
Integrating Explainable Machine Learning in Clinical Decision Support Systems: Study Involving a Modified Design Thinking Approach.
JMIR Form Res. 2024 Apr 16;8:e50475. doi: 10.2196/50475.
4
Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
5
Explainability in medicine in an era of AI-based clinical decision support systems.
Front Genet. 2022 Sep 19;13:903600. doi: 10.3389/fgene.2022.903600. eCollection 2022.
6
Responsible AI for cardiovascular disease detection: Towards a privacy-preserving and interpretable model.
Comput Methods Programs Biomed. 2024 Sep;254:108289. doi: 10.1016/j.cmpb.2024.108289. Epub 2024 Jun 17.
7
Causability and explainability of artificial intelligence in medicine.
Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2.
8
Explainable artificial intelligence in emergency medicine: an overview.
Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
9
Explainability does not improve biochemistry staff trust in artificial intelligence-based decision support.
Ann Clin Biochem. 2022 Nov;59(6):447-449. doi: 10.1177/00045632221128687. Epub 2022 Sep 22.
10
"Just" accuracy? Procedural fairness demands explainability in AI-based medical resource allocations.
AI Soc. 2022 Dec 21:1-12. doi: 10.1007/s00146-022-01614-9.

References Cited in This Article

1
Explainable machine learning approach to predict extubation in critically ill ventilated patients: a retrospective study in central Taiwan.
BMC Anesthesiol. 2022 Nov 14;22(1):351. doi: 10.1186/s12871-022-01888-y.
2
Scrutinizing XAI using linear ground-truth data with suppressor variables.
Mach Learn. 2022;111(5):1903-1923. doi: 10.1007/s10994-022-06167-y. Epub 2022 Apr 13.
3
A systematic literature review on spam content detection and classification.
PeerJ Comput Sci. 2022 Jan 20;8:e830. doi: 10.7717/peerj-cs.830. eCollection 2022.
4
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
BMC Med Inform Decis Mak. 2020 Nov 30;20(1):310. doi: 10.1186/s12911-020-01332-6.
5
A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis.
Biostatistics. 2009 Jul;10(3):515-34. doi: 10.1093/biostatistics/kxp008. Epub 2009 Apr 17.