de Laat, Paul B.
University of Groningen, Groningen, Netherlands.
Philos Technol. 2018;31(4):525-541. doi: 10.1007/s13347-017-0293-z. Epub 2017 Nov 12.
Decision-making assisted by algorithms developed through machine learning increasingly determines our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems, as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosing the algorithms themselves ("gaming the system" in particular), the potential loss of companies' competitive edge, and the limited gains in answerability to be expected, since sophisticated algorithms are usually inherently opaque. It is concluded that, at least at present, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions should preferably become more understandable; to that end, the machine-learning models employed should either be interpreted ex post or be interpretable by design ex ante.
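For illustration only: the abstract's closing distinction between ex post interpretation and interpretability by design can be made concrete with a minimal Python sketch using scikit-learn. The paper itself proposes no code, so the library, dataset, and model choices below are assumptions: an opaque ensemble probed after the fact with permutation importance, versus a shallow decision tree whose rules are readable by construction.

```python
# Hypothetical sketch (not from the paper): two routes to understandable
# algorithmic decisions, using scikit-learn as an assumed toolkit.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Ex post: fit an opaque model, then probe it after the fact.
# Permutation importance asks how much test accuracy drops when
# each feature is shuffled, i.e. which inputs drive the decisions.
opaque = RandomForestClassifier(n_estimators=100, random_state=0)
opaque.fit(X_train, y_train)
probe = permutation_importance(opaque, X_test, y_test,
                               n_repeats=10, random_state=0)
for i in probe.importances_mean.argsort()[::-1][:3]:
    print(f"{data.feature_names[i]}: {probe.importances_mean[i]:.3f}")

# Ex ante: choose a model that is interpretable by design.
# A depth-limited decision tree prints as human-readable rules.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0)
glass_box.fit(X_train, y_train)
print(export_text(glass_box, feature_names=list(data.feature_names)))
```

The trade-off the sketch surfaces is the one the abstract gestures at: the ex post probe explains an accurate but opaque model only approximately, while the glass-box tree is fully inspectable but may sacrifice predictive power.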