Cole Matthew, Cant Callum, Ustek-Spilda Funda, Graham Mark
Social Sciences Division, Oxford Internet Institute, University of Oxford, Oxford, United Kingdom.
Front Artif Intell. 2022 Jul 15;5:869114. doi: 10.3389/frai.2022.869114. eCollection 2022.
Calls for "ethical Artificial Intelligence" are legion, with a recent proliferation of government and industry guidelines attempting to establish ethical rules and boundaries for this new technology. With few exceptions, they interpret Artificial Intelligence (AI) ethics narrowly in a liberal political framework of privacy concerns, transparency, governance and non-discrimination. One of the main hurdles to establishing "ethical AI" remains how to operationalize high-level principles such that they translate to technology design, development and use in the labor process. This is because organizations can end up interpreting ethics in an way with no oversight, treating ethics as simply another technological problem with technological solutions, and regulations have been largely detached from the issues AI presents for workers. There is a distinct lack of supra-national standards for fair, decent, or just AI in contexts where people depend on and work in tandem with it. Topics such as discrimination and bias in job allocation, surveillance and control in the labor process, and quantification of work have received significant attention, yet questions around AI and job quality and working conditions have not. This has left workers exposed to potential risks and harms of AI. In this paper, we provide a critique of relevant academic literature and policies related to AI ethics. We then identify a set of principles that could facilitate fairer working conditions with AI. As part of a broader research initiative with the Global Partnership on Artificial Intelligence, we propose a set of accountability mechanisms to ensure AI systems foster fairer working conditions. Such processes are aimed at reshaping the social impact of technology from the point of inception to set a research agenda for the future. As such, the key contribution of the paper is how to bridge from abstract ethical principles to operationalizable processes in the vast field of AI and new technology at work.