Elliott Karen, Price Rob, Shaw Patricia, Spiliotopoulos Tasos, Ng Magdalene, Coopamootoo Kovila, van Moorsel Aad
School of Computing & Business School, Newcastle University, 1 Science Square, The Helix, Newcastle upon Tyne, NE4 5TG UK.
http://CorporateDigitalResponsibility.net (CDR), Alchemmy, 52-54 High Holborn, London, WC1V 6RL UK.
Society. 2021;58(3):179-188. doi: 10.1007/s12115-021-00594-8. Epub 2021 Jun 14.
In the digital era, we witness the increasing use of artificial intelligence (AI) to solve problems, while improving productivity and efficiency. Yet, inevitably, costs are involved in delegating power to algorithmically based systems, some of whose workings are opaque and unobservable and thus termed the "black box". Central to understanding the "black box" is acknowledging that the algorithm is not mendaciously undertaking this action; it is simply using the recombination afforded to scaled, computable machine learning algorithms. Yet an algorithm with arbitrary precision can easily reconstruct personal characteristics and make life-changing decisions, particularly in financial services (credit scoring, risk assessment, etc.), and it can be difficult to determine whether this was done in a fair manner reflecting the values of society. If we permit AI to make life-changing decisions, what are the opportunity costs, data trade-offs, and implications for social, economic, technical, legal, and environmental systems? We find that over 160 ethical AI principles exist, urging organisations to act responsibly to avoid causing digital societal harms. This maelstrom of guidance, none of which is compulsory, serves to confuse rather than guide. We need to think carefully about how we implement these algorithms, how decisions are delegated, and how data are used in the absence of human oversight and AI governance. The paper seeks to harmonise and align approaches, illustrating the opportunities and threats of AI, while raising awareness of Corporate Digital Responsibility (CDR) as a potential collaborative mechanism to demystify governance complexity and to establish an equitable digital society.
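To make the proxy risk described above concrete, the sketch below is a minimal, hypothetical illustration (not the authors' method): it trains an opaque ensemble model on synthetic, ostensibly neutral features, here an invented "postcode cluster" and "spending pattern", and shows that the model can reconstruct a protected characteristic well above chance even though that characteristic is never given as an input. All names, data, and parameters are assumptions made for the example.

# Minimal sketch, assuming synthetic data and hypothetical feature names,
# of how a "black box" model can reconstruct a protected characteristic
# from proxy features -- the fairness risk raised for credit scoring
# and risk assessment in the abstract above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical protected attribute (e.g. a demographic group label).
protected = rng.integers(0, 2, size=n)

# "Neutral" features statistically entangled with the protected attribute
# (e.g. postcode cluster, spending pattern) -- entangled by construction here.
postcode_cluster = protected * 2 + rng.normal(0, 1, size=n)
spend_pattern = -protected + rng.normal(0, 1, size=n)
X = np.column_stack([postcode_cluster, spend_pattern])

X_train, X_test, y_train, y_test = train_test_split(
    X, protected, test_size=0.3, random_state=0
)

# The protected attribute is never supplied as an input, yet the opaque
# model recovers it from the proxies alone with accuracy well above chance.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Reconstruction accuracy from proxies: {model.score(X_test, y_test):.2f}")

In this toy setting the reconstruction is deliberate, but the same mechanism arises unintentionally whenever training data encode societal correlations, which is why removing a protected attribute from the inputs does not by itself make an algorithmic decision fair.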