

Pullback Bundles and the Geometry of Learning

Author

Puechmorel Stéphane

Affiliation

ENAC (École Nationale de l'Aviation Civile), Université de Toulouse, 7, Avenue Edouard Belin, 31055 Toulouse, France.

Publication

Entropy (Basel). 2023 Oct 15;25(10):1450. doi: 10.3390/e25101450.

DOI:10.3390/e25101450
PMID:37895571
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10606266/
Abstract

Explainable Artificial Intelligence (XAI) and acceptable artificial intelligence are active topics of research in machine learning. For critical applications, being able to prove or at least to ensure with a high probability the correctness of algorithms is of utmost importance. In practice, however, few theoretical tools are known that can be used for this purpose. Using the Fisher Information Metric (FIM) on the output space yields interesting indicators in both the input and parameter spaces, but the underlying geometry is not yet fully understood. In this work, an approach based on the pullback bundle, a well-known trick for describing bundle morphisms, is introduced and applied to the encoder-decoder block. With constant rank hypothesis on the derivative of the network with respect to its inputs, a description of its behavior is obtained. Further generalization is gained through the introduction of the pullback generalized bundle that takes into account the sensitivity with respect to weights.

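The abstract's central object, the pullback of the Fisher Information Metric from the output space to the input space, can be illustrated numerically. The sketch below is not the paper's construction: it uses a hypothetical two-layer toy network, a finite-difference Jacobian, and an isotropic Gaussian output model (for which the output-space FIM is the identity scaled by 1/sigma^2), so the pullback metric reduces to J^T J / sigma^2. The rank deficiency of the result mirrors the constant-rank hypothesis discussed in the paper.

```python
import numpy as np

# Hypothetical toy "encoder": R^3 -> R^2, with randomly drawn weights.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))

def f(x):
    return W2 @ np.tanh(W1 @ x)

def jacobian(x, eps=1e-6):
    # Central finite-difference Jacobian of f at x (shape: out_dim x in_dim).
    n = x.size
    J = np.zeros((f(x).size, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

def pullback_metric(x, sigma=1.0):
    # For a Gaussian observation model, G_out = I / sigma^2, so the
    # pullback metric on the input space is G_in(x) = J(x)^T J(x) / sigma^2.
    J = jacobian(x)
    return J.T @ J / sigma**2

x = rng.standard_normal(3)
G = pullback_metric(x)
print(G.shape)  # (3, 3): a metric on the 3-dimensional input space
# Since J is 2x3, the pullback has rank at most 2: it is degenerate,
# which is why the constant-rank hypothesis matters for the geometry.
print(np.linalg.matrix_rank(G))
```

The degenerate directions of G (its kernel) are input perturbations the network cannot distinguish at first order, which is one way the pullback metric yields the "interesting indicators" in the input space mentioned above.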

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eb71/10606266/96a83246dc40/entropy-25-01450-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eb71/10606266/4391d1bea731/entropy-25-01450-g002.jpg

Similar Articles

1
Pullback Bundles and the Geometry of Learning.
Entropy (Basel). 2023 Oct 15;25(10):1450. doi: 10.3390/e25101450.
2
Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.
Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
3
Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI.
Neural Netw. 2022 Nov;155:95-118. doi: 10.1016/j.neunet.2022.08.002. Epub 2022 Aug 6.
4
Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI).
Sensors (Basel). 2022 Aug 23;22(17):6338. doi: 10.3390/s22176338.
5
Explainable Artificial Intelligence for Predictive Modeling in Healthcare.
J Healthc Inform Res. 2022 Feb 11;6(2):228-239. doi: 10.1007/s41666-022-00114-1. eCollection 2022 Jun.
6
Explainable artificial intelligence for pharmacovigilance: What features are important when predicting adverse outcomes?
Comput Methods Programs Biomed. 2021 Nov;212:106415. doi: 10.1016/j.cmpb.2021.106415. Epub 2021 Sep 26.
7
Algebraic geometrical methods for hierarchical learning machines.
Neural Netw. 2001 Oct;14(8):1049-60. doi: 10.1016/s0893-6080(01)00069-7.
8
Non-human primate epidural ECoG analysis using explainable deep learning technology.
J Neural Eng. 2021 Nov 25;18(6). doi: 10.1088/1741-2552/ac3314.
9
Explainable artificial intelligence (XAI) in radiology and nuclear medicine: a literature review.
Front Med (Lausanne). 2023 May 12;10:1180773. doi: 10.3389/fmed.2023.1180773. eCollection 2023.
10
Interpretation of ensemble learning to predict water quality using explainable artificial intelligence.
Sci Total Environ. 2022 Aug 1;832:155070. doi: 10.1016/j.scitotenv.2022.155070. Epub 2022 Apr 6.

References Cited in This Article

1
A Novel Encoder-Decoder Model for Multivariate Time Series Forecasting.
Comput Intell Neurosci. 2022 Apr 14;2022:5596676. doi: 10.1155/2022/5596676. eCollection 2022.
2
Explainable AI: A Review of Machine Learning Interpretability Methods.
Entropy (Basel). 2020 Dec 25;23(1):18. doi: 10.3390/e23010018.