The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity.

Affiliations

School of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA.

School of Public Policy, Georgia Institute of Technology, 685 Cherry Street, Atlanta, GA 30332-0345, USA.

Publication Information

Sci Eng Ethics. 2018 Oct;24(5):1521-1536. doi: 10.1007/s11948-017-9975-2. Epub 2017 Sep 21.

Abstract

Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence (AI) applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men and not women, or providing arrest mugshots when keywords such as "black teenagers" are entered. Learning algorithms are evolving; they are often created from parsing through large datasets of online information while having truth labels bestowed on them by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups, and released onto the public. Yet intelligent though they may be, these algorithms maintain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce these biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of (1) a robot peacekeeper, (2) a self-driving car, and (3) a medical robot. We conclude with an overview of measures that could be taken to mitigate or halt bias from permeating robotic technology.
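To make the mechanism described above concrete, the minimal Python sketch below (our illustration, not code from the paper) trains a classifier on synthetically biased crowd labels. The feature names, the data, and the 30% label-flip rate are all hypothetical; the point is only that a model fit to biased "truth" labels reproduces the labelers' bias as if it were ground truth.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: a binary group attribute and a skill score that is
# identically distributed in both groups, so any disparity below must come
# from the labels, not from the applicants.
group = rng.integers(0, 2, size=n)
skill = rng.normal(loc=0.0, scale=1.0, size=n)

# Ground truth depends on skill only.
truth = (skill > 0).astype(int)

# Crowd-sourced labels: labelers flip true positives to negatives 30% of
# the time, but only for group 1 -- an implicit bias baked into the labels.
flip = (group == 1) & (truth == 1) & (rng.random(n) < 0.3)
label = np.where(flip, 0, truth)

# A model trained on the biased labels.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# Probe both groups at the identical skill level: the model has absorbed the
# labelers' bias and scores group 1 lower despite equal qualifications.
probe = np.array([[0.5, 0], [0.5, 1]])
p = model.predict_proba(probe)[:, 1]
print(f"P(positive | skill=0.5, group 0) = {p[0]:.2f}")
print(f"P(positive | skill=0.5, group 1) = {p[1]:.2f}")
```

Note that simply dropping the group column would not guarantee fairness in a real dataset: other features correlated with group membership can act as proxies and carry the same bias into the model.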

