Koshiyama Adriano, Kazim Emre, Treleaven Philip, Rai Pete, Szpruch Lukasz, Pavey Giles, Ahamat Ghazi, Leutner Franziska, Goebel Randy, Knight Andrew, Adams Janet, Hitrova Christina, Barnett Jeremy, Nachev Parashkev, Barber David, Chamorro-Premuzic Tomas, Klemmer Konstantin, Gregorovic Miro, Khan Shakeel, Lomas Elizabeth, Hilliard Airlie, Chatterjee Siddhant
Department of Computer Science, University College London, London WC1E 6EA, UK.
Holistic AI, London W1D 3QH, UK.
R Soc Open Sci. 2024 May 15;11(5):230859. doi: 10.1098/rsos.230859. eCollection 2024 May.
Business reliance on algorithms is becoming ubiquitous, and companies are increasingly concerned about their algorithms causing major financial or reputational damage. High-profile cases include Google's AI algorithm for photo classification mistakenly labelling a black couple as gorillas in 2015 (Gebru 2020, pp. 251-269), Microsoft's AI chatbot Tay spreading racist, sexist and antisemitic speech on Twitter (now X) (Wolf et al. 2017, pp. 54-64; doi:10.1145/3144592.3144598), and Amazon's AI recruiting tool being scrapped after showing bias against women. In response, governments are legislating and imposing bans, regulators are fining companies, and the judiciary is discussing whether to make algorithms artificial 'persons' in law. As with financial audits, governments, business and society will require algorithm audits: formal assurance that algorithms are legal, ethical and safe. A new industry is envisaged: Auditing and Assurance of Algorithms (cf. data privacy), with the remit to professionalize and industrialize AI, ML and associated algorithms. The stakeholders range from those working on policy/regulation to industry practitioners and developers. We also anticipate that the nature and scope of the auditing levels and framework presented will inform those interested in systems of governance and compliance with regulation/standards. Our goal in this article is to survey the key areas necessary to perform auditing and assurance, and to instigate debate in this novel area of research and practice.