

The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables.

Author information

Angelucci Alessandra, Li Ziyue, Stoimenova Niya, Canali Stefano

Affiliations

Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy.

The Cologne Institute of Information Systems, Faculty of Management, Economics and Social Sciences, University of Cologne, Cologne, Germany.

Publication information

AI Soc. 2022 Sep 26:1-11. doi: 10.1007/s00146-022-01562-4.

DOI:10.1007/s00146-022-01562-4
PMID:36185063
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9511446/
Abstract

Artificial intelligence (AI) systems have been widely applied to various contexts, including high-stake decision processes in healthcare, banking, and judicial systems. Some developed AI models fail to offer a fair output for specific minority groups, sparking comprehensive discussions about AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation one stakeholder has within the AI system's life cycle, the more influence they have over the way the system will function. This means that the impact on the fairness of the system is in the hands of those who are less impacted by it. However, most of the existing works ignore how different aspects of AI fairness are dynamically and adaptively affected by different stages of AI system development. To this end, we present a use case to discuss fairness in the development of corporate wellness programs using smart wearables and AI algorithms to analyze data. The four key stakeholders throughout this type of AI system development process are presented. These stakeholders are called service designer, algorithm designer, system deployer, and end-user. We identify three core aspects of AI fairness, namely, contextual fairness, model fairness, and device fairness. We propose a relative contribution of the four stakeholders to the three aspects of fairness. Furthermore, we propose the boundaries and interactions between the four roles, from which we make our conclusion about the possible unfairness in such an AI developing process.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b66/9511446/a4d999b31305/146_2022_1562_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b66/9511446/5a0bd3c1a977/146_2022_1562_Fig2_HTML.jpg

Similar articles

1
Fairness of artificial intelligence in healthcare: review and recommendations.
Jpn J Radiol. 2024 Jan;42(1):3-15. doi: 10.1007/s11604-023-01474-3. Epub 2023 Aug 4.
2
Multidisciplinary considerations of fairness in medical AI: A scoping review.
Int J Med Inform. 2023 Oct;178:105175. doi: 10.1016/j.ijmedinf.2023.105175. Epub 2023 Aug 8.
3
Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health.
Front Artif Intell. 2021 Apr 15;3:561802. doi: 10.3389/frai.2020.561802. eCollection 2020.
4
A translational perspective towards clinical AI fairness.
NPJ Digit Med. 2023 Sep 14;6(1):172. doi: 10.1038/s41746-023-00918-4.
5
Recommendations to promote fairness and inclusion in biomedical AI research and clinical use.
J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
6
Fairness as Equal Concession: Critical Remarks on Fair AI.
Sci Eng Ethics. 2021 Nov 22;27(6):73. doi: 10.1007/s11948-021-00348-z.
7
What does it mean for a clinical AI to be just: conflicts between local fairness and being fit-for-purpose?
J Med Ethics. 2024 Feb 29. doi: 10.1136/jme-2023-109675.
8
Increasing clinical medical service satisfaction: An investigation into the impacts of physicians' use of clinical decision-making support AI on patients' service satisfaction.
Int J Med Inform. 2023 Aug;176:105107. doi: 10.1016/j.ijmedinf.2023.105107. Epub 2023 May 21.
9
Grading by AI makes me feel fairer? How different evaluators affect college students' perception of fairness.
Front Psychol. 2024 Feb 2;15:1221177. doi: 10.3389/fpsyg.2024.1221177. eCollection 2024.

Cited by

1
Wearable devices for patient monitoring in the intensive care unit.
Intensive Care Med Exp. 2025 Feb 27;13(1):26. doi: 10.1186/s40635-025-00738-8.
2
Digital technologies for step counting: between promises of reliability and risks of reductionism.
Front Digit Health. 2023 Dec 13;5:1330189. doi: 10.3389/fdgth.2023.1330189. eCollection 2023.
3
Wearable Technologies and Stress: Toward an Ethically Grounded Approach.
