Räz Tim, Beisbart Claus
University of Bern, Institute of Philosophy, Länggassstrasse 49a, 3012 Bern, Switzerland.
Center for Artificial Intelligence in Medicine, University of Bern, Bern, Switzerland.
Erkenntnis. 2024;89(5):1823-1840. doi: 10.1007/s10670-022-00605-y. Epub 2022 Aug 7.
Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with DNNs. Whether Sullivan's claim is tenable hinges on which notion of understanding is at play. If we employ a weak notion of understanding, then her claim is tenable, but rather weak. If, however, we employ a strong notion of understanding, in particular explanatory understanding, then her claim is not tenable.