Maheshwari Kritika, Jedan Christoph, Christiaans Imke, van Gijn Mariëlle, Maeckelberghe Els, Plantinga Mirjam
Ethics and Philosophy of Technology Section, Department of Values, Technology and Innovation, Delft University of Technology, Delft, The Netherlands.
Ethics and Comparative Philosophy of Religion, Department of Christianity and the History of Ideas, Faculty of Religion, Culture and Society, University of Groningen, Groningen, The Netherlands.
Camb Q Healthc Ethics. 2024 Apr 29:1-15. doi: 10.1017/S0963180124000215.
This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies (or AI-inclusivity) in healthcare. Drawing on recent literature on epistemic trust and public trust in science, we start by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we argue that institutional epistemic trust in AI-inclusive healthcare depends, in part, on the reliability of AI-inclusive medical practices and programs; on how well these are known and understood by the different stakeholders involved; on their effect on the epistemic and communicative duties and burdens of medical professionals; and, finally, on their interaction and alignment with the public's ethical values and interests, as well as with the background sociopolitical conditions within which AI-inclusive healthcare systems are embedded. To assess the applicability of these conditions, we explore a recent proposal for AI-inclusivity within the Dutch Newborn Screening Program. In doing so, we illustrate the importance, scope, and potential challenges of fostering and maintaining institutional epistemic trust in a context where generating, assessing, and providing reliable and timely screening results for genetic risk is of high priority. To motivate the wider relevance of our discussion and case study, we conclude with suggestions for strategies, interventions, and measures for AI-inclusivity in healthcare more broadly.