
Justice at the Forefront: Cultivating felt accountability towards Artificial Intelligence among healthcare professionals.

Affiliations

Research Center for Smarter Supply Chain, Business School, Soochow University, 50 Donghuan Road, Suzhou, 215006, China.

Sheffield University Management School, University of Sheffield, Conduit Rd, Sheffield, S10 1FL, United Kingdom.

Publication Information

Soc Sci Med. 2024 Apr;347:116717. doi: 10.1016/j.socscimed.2024.116717. Epub 2024 Mar 6.

Abstract

The advent of AI has ushered in a new era of patient care, but with it emerges a contentious debate surrounding accountability for algorithmic medical decisions. Within this discourse, a spectrum of views prevails, ranging from placing accountability on AI solution providers to laying it squarely on the shoulders of healthcare professionals. In response to this debate, this study, grounded in the mutualistic partner choice (MPC) model of the evolution of morality, seeks to establish a configurational framework for cultivating felt accountability towards AI among healthcare professionals. This framework underscores two pivotal conditions, AI ethics enactment and trusting belief in AI, and considers the influence of organizational complexity on the implementation of this framework. Drawing on a Fuzzy-set Qualitative Comparative Analysis (fsQCA) of a sample of 401 healthcare professionals, this study reveals that a) focusing on justice and autonomy in AI ethics enactment, along with building trusting belief in AI reliability and functionality, reinforces healthcare professionals' sense of felt accountability towards AI; b) in high-complexity hospitals, fostering felt accountability towards AI necessitates establishing trust in AI functionality; and c) in low-complexity hospitals, prioritizing justice in AI ethics enactment and trust in AI reliability is essential.
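The study's configurational claims rest on fsQCA, which first calibrates raw survey measures into fuzzy-set memberships and then assesses how consistently a condition (or combination of conditions) is sufficient for the outcome. The sketch below is a minimal Python illustration of that general workflow, not the authors' actual analysis: the calibration anchors, variable names, and simulated data are assumptions made for demonstration only.

# Illustrative fsQCA-style calibration and consistency/coverage computation.
# Assumes hypothetical 1-7 Likert scores; anchor values (2, 4, 6) are illustrative, not from the paper.
import numpy as np

def calibrate(x, full_non, crossover, full_mem):
    # Direct calibration: map raw scores to fuzzy membership in [0, 1],
    # with the three anchors landing near 0.05, 0.5, and 0.95 membership.
    x = np.asarray(x, dtype=float)
    dev = x - crossover
    scalar = np.where(dev >= 0, 3.0 / (full_mem - crossover), 3.0 / (crossover - full_non))
    return 1.0 / (1.0 + np.exp(-dev * scalar))

def consistency(condition, outcome):
    # Sufficiency consistency: degree to which the condition is a subset of the outcome.
    return np.minimum(condition, outcome).sum() / condition.sum()

def coverage(condition, outcome):
    # Coverage: share of the outcome accounted for by the condition.
    return np.minimum(condition, outcome).sum() / outcome.sum()

# Hypothetical data: justice-focused AI ethics enactment vs. felt accountability (n = 401).
rng = np.random.default_rng(0)
justice_raw = rng.uniform(1, 7, size=401)
accountability_raw = np.clip(justice_raw + rng.normal(0, 1, size=401), 1, 7)

justice = calibrate(justice_raw, full_non=2, crossover=4, full_mem=6)
accountability = calibrate(accountability_raw, full_non=2, crossover=4, full_mem=6)

print(f"consistency = {consistency(justice, accountability):.2f}")
print(f"coverage    = {coverage(justice, accountability):.2f}")

In practice, combinations of calibrated conditions (e.g., the fuzzy intersection, taken as the element-wise minimum) would be evaluated the same way, and solutions retained only above a consistency threshold.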

