
In Search of Trustworthy and Transparent Intelligent Systems With Human-Like Cognitive and Reasoning Capabilities.

Authors

Nikhil R. Pal

Affiliation

Indian Statistical Institute, Electronics and Communication Sciences Unit, The Centre for Artificial Intelligence and Machine Learning, Calcutta, India.

Publication

Front Robot AI. 2020 Jun 19;7:76. doi: 10.3389/frobt.2020.00076. eCollection 2020.

DOI: 10.3389/frobt.2020.00076
PMID: 33501243
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7806014/
Abstract

At present we are witnessing a tremendous interest in Artificial Intelligence (AI), particularly in Deep Learning (DL)/Deep Neural Networks (DNNs). One of the reasons appears to be the unmatched performance achieved by such systems. This has resulted in enormous hope in such techniques, which are often viewed as all-cure solutions. But most of these systems cannot explain why a particular decision is made (black box), and sometimes fail miserably in cases where other systems would not. Consequently, in critical applications such as healthcare and defense, practitioners are reluctant to trust such systems. Although an AI system is often designed taking inspiration from the brain, there is not much attempt to exploit cues from the brain in the true sense. In our opinion, to realize intelligent systems with human-like reasoning ability, we need to exploit knowledge from brain science. Here we discuss a few findings in brain science that may help in designing intelligent systems. We explain the relevance of transparency, explainability, learning from a few examples, and the trustworthiness of an AI system. We also discuss a few ways that may help to achieve these attributes in a learning system.
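The "black box" problem the abstract highlights can be made concrete. One common post-hoc remedy is to approximate a black-box model near a single input with a weighted linear surrogate, whose coefficients then serve as a local explanation of that one decision; this is the core idea behind local surrogate methods such as LIME. The sketch below is purely illustrative: the `black_box` function, the perturbation scale, and the kernel width are all hypothetical choices, not anything from the paper.

```python
import numpy as np

# A "black-box" model: we may query predictions but cannot inspect internals.
# Here, a hypothetical risk score combining two inputs nonlinearly.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2)))

rng = np.random.default_rng(0)
x0 = np.array([0.5, 0.2])  # the instance whose decision we want to explain

# Perturb around x0, query the black box, and fit a locally weighted
# linear surrogate whose slopes explain the decision near x0.
Z = x0 + 0.1 * rng.standard_normal((500, 2))
y = black_box(Z)
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.01)  # nearby samples weigh more
sw = np.sqrt(w)                                     # sqrt weights for least squares
A = np.hstack([Z, np.ones((500, 1))])               # design matrix with bias column
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

print("local feature influences:", coef[:2])  # per-feature slope near x0
```

With this toy model, the surrogate recovers a positive influence for the first feature and a negative one for the second near `x0`, matching the signs of the black box's local gradient; the surrogate explains only this neighborhood, not the model globally.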

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2170/7806014/4316c6c7aca6/frobt-07-00076-g0001.jpg

Similar articles

1. In Search of Trustworthy and Transparent Intelligent Systems With Human-Like Cognitive and Reasoning Capabilities.
   Front Robot AI. 2020 Jun 19;7:76. doi: 10.3389/frobt.2020.00076. eCollection 2020.
2. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
   J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
3. Explainable AI for Bioinformatics: Methods, Tools and Applications.
   Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
4. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond.
   Inf Fusion. 2022 Jan;77:29-52. doi: 10.1016/j.inffus.2021.07.016.
5. Trustworthy Artificial Intelligence in Medical Imaging.
   PET Clin. 2022 Jan;17(1):1-12. doi: 10.1016/j.cpet.2021.09.007.
6. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.
   J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10.
7. Causability and explainability of artificial intelligence in medicine.
   Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2.
8. Explainable AI: A Review of Machine Learning Interpretability Methods.
   Entropy (Basel). 2020 Dec 25;23(1):18. doi: 10.3390/e23010018.
9. Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation.
   N Biotechnol. 2022 Sep 25;70:67-72. doi: 10.1016/j.nbt.2022.05.002. Epub 2022 May 6.
10. Explainable artificial intelligence in emergency medicine: an overview.
    Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.

References cited in this article

1. Activations of deep convolutional neural networks are aligned with gamma band activity of human visual cortex.
   Commun Biol. 2018 Aug 8;1:107. doi: 10.1038/s42003-018-0110-y. eCollection 2018.
2. Visual Analytics for Explainable Deep Learning.
   IEEE Comput Graph Appl. 2018 Jul/Aug;38(4):84-92. doi: 10.1109/MCG.2018.042731661.
3. Towards deep learning with segregated dendrites.
   Elife. 2017 Dec 5;6:e22901. doi: 10.7554/eLife.22901.
4. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine.
   Front Neurosci. 2017 Aug 9;11:454. doi: 10.3389/fnins.2017.00454. eCollection 2017.
5. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.
   Annu Rev Vis Sci. 2015 Nov 24;1:417-446. doi: 10.1146/annurev-vision-082114-035447.
6. Visualizing the Hidden Activity of Artificial Neural Networks.
   IEEE Trans Vis Comput Graph. 2017 Jan;23(1):101-110. doi: 10.1109/TVCG.2016.2598838.
7. Random synaptic feedback weights support error backpropagation for deep learning.
   Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
8. Interactive machine learning for health informatics: when do we need the human-in-the-loop?
   Brain Inform. 2016 Jun;3(2):119-131. doi: 10.1007/s40708-016-0042-6. Epub 2016 Mar 2.
9. Towards Better Analysis of Deep Convolutional Neural Networks.
   IEEE Trans Vis Comput Graph. 2017 Jan;23(1):91-100. doi: 10.1109/TVCG.2016.2598831. Epub 2016 Aug 9.
10. Using goal-driven deep learning models to understand sensory cortex.
    Nat Neurosci. 2016 Mar;19(3):356-65. doi: 10.1038/nn.4244.