The Importance of Understanding Deep Learning.

Authors

Tim Räz, Claus Beisbart

Affiliations

University of Bern, Institute of Philosophy, Länggassstrasse 49a, 3012 Bern, Switzerland.

Center for Artificial Intelligence in Medicine, University of Bern, Bern, Switzerland.

Publication Information

Erkenntnis. 2024;89(5):1823-1840. doi: 10.1007/s10670-022-00605-y. Epub 2022 Aug 7.

Abstract

Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with DNNs. Sullivan's claim hinges on which notion of understanding is at play. If we employ a weak notion of understanding, then her claim is tenable, but rather weak. If, however, we employ a strong notion of understanding, particularly explanatory understanding, then her claim is not tenable.

Similar Articles

1
Are Deep Neural Networks Adequate Behavioral Models of Human Visual Perception?
Annu Rev Vis Sci. 2023 Sep 15;9:501-524. doi: 10.1146/annurev-vision-120522-031739. Epub 2023 Mar 31.
2
Symbolic Deep Networks: A Psychologically Inspired Lightweight and Efficient Approach to Deep Learning.
Top Cogn Sci. 2022 Oct;14(4):702-717. doi: 10.1111/tops.12571. Epub 2021 Oct 5.
3
Deep Neural Networks as Scientific Models.
Trends Cogn Sci. 2019 Apr;23(4):305-317. doi: 10.1016/j.tics.2019.01.009. Epub 2019 Feb 19.
4
Development of a Basic Educational Kit for Robotic System with Deep Neural Networks.
Sensors (Basel). 2021 May 31;21(11):3804. doi: 10.3390/s21113804.
5
DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains.
Front Comput Neurosci. 2020 Nov 30;14:580632. doi: 10.3389/fncom.2020.580632. eCollection 2020.
6
Harnessing Deep Learning in Ecology: An Example Predicting Bark Beetle Outbreaks.
Front Plant Sci. 2019 Oct 28;10:1327. doi: 10.3389/fpls.2019.01327. eCollection 2019.
7
Analyzing biological and artificial neural networks: challenges with opportunities for synergy?
Curr Opin Neurobiol. 2019 Apr;55:55-64. doi: 10.1016/j.conb.2019.01.007. Epub 2019 Feb 19.
8
Visual Genealogy of Deep Neural Networks.
IEEE Trans Vis Comput Graph. 2020 Nov;26(11):3340-3352. doi: 10.1109/TVCG.2019.2921323. Epub 2019 Jun 6.

Cited By

1
Explainability Through Systematicity: The Hard Systematicity Challenge for Artificial Intelligence.
Minds Mach (Dordr). 2025;35(3):35. doi: 10.1007/s11023-025-09738-9. Epub 2025 Jul 29.
2
Machine learning and the quest for objectivity in climate model parameterization.
Clim Change. 2023;176(8):101. doi: 10.1007/s10584-023-03532-1. Epub 2023 Jul 18.
3
Methods for identifying emergent concepts in deep neural networks.
Patterns (N Y). 2023 Jun 9;4(6):100761. doi: 10.1016/j.patter.2023.100761.

References

1
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
2
The Conditional Entropy Bottleneck.
Entropy (Basel). 2020 Sep 8;22(9):999. doi: 10.3390/e22090999.
3
How could models possibly provide how-possibly explanations?
Stud Hist Philos Sci. 2019 Feb;73:22-33. doi: 10.1016/j.shpsa.2018.06.008. Epub 2018 Jun 29.
