The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables.

Author information

Angelucci Alessandra, Li Ziyue, Stoimenova Niya, Canali Stefano

Affiliations

Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy.

The Cologne Institute of Information Systems, Faculty of Management, Economics and Social Sciences, University of Cologne, Cologne, Germany.

Publication information

AI Soc. 2022 Sep 26:1-11. doi: 10.1007/s00146-022-01562-4.

Abstract

Artificial intelligence (AI) systems have been widely applied in various contexts, including high-stakes decision processes in healthcare, banking, and judicial systems. Some AI models fail to offer fair outputs for specific minority groups, sparking comprehensive discussions about AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation one stakeholder has within the AI system's life cycle, the more influence they have over the way the system will function. This means that the impact on the fairness of the system is in the hands of those who are less impacted by it. However, most existing work ignores how different aspects of AI fairness are dynamically and adaptively affected by different stages of AI system development. To this end, we present a use case to discuss fairness in the development of corporate wellness programs that use smart wearables and AI algorithms to analyze data. We present the four key stakeholders in this type of AI system development process: the service designer, the algorithm designer, the system deployer, and the end-user. We identify three core aspects of AI fairness, namely contextual fairness, model fairness, and device fairness, and propose a relative contribution of the four stakeholders to each of these aspects. Furthermore, we propose the boundaries and interactions between the four roles, from which we draw our conclusions about the possible unfairness in such an AI development process.
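The abstract names "model fairness" as one of three core aspects but, being an abstract, does not define it formally. One common way to operationalize such a notion is a group-fairness check like demographic parity. The Python sketch below is purely illustrative and not from the paper: the predictions, the grouping of end-users by wearable device type (gesturing at "device fairness"), and the function name are all hypothetical assumptions.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, rates): the positive-prediction rate per group and
    the largest difference between any two groups (0.0 = equal rates).
    Hypothetical helper for illustration; not from the paper."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Made-up outputs of a wellness-reward classifier, split by the wearable
# device each end-user happens to own (illustrating "device fairness").
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
device = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, device)
print(f"positive rates by device: {rates}, parity gap: {gap:.2f}")

On this toy data, device-A users receive positive predictions at a 0.60 rate versus 0.40 for device-B users, a parity gap of 0.20; whether such a gap is acceptable is exactly the kind of question the paper attributes to different stakeholders at different stages.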

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b66/9511446/a4d999b31305/146_2022_1562_Fig1_HTML.jpg
