
Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare.

Author information

Pozzi Giorgia

Affiliation

Faculty of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands.

Publication information

Ethics Inf Technol. 2023;25(1):3. doi: 10.1007/s10676-023-09676-z. Epub 2023 Jan 23.

Abstract

Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients' likelihood of opioid addiction and misuse (PDMP algorithmic platforms). Drawing on this analysis, I aim to show that the wrong inflicted on epistemic agents involved in and affected by these systems' decision-making processes can be captured through the lenses of Miranda Fricker's account of hermeneutical injustice. I further argue that ML-induced hermeneutical injustice is particularly harmful due to what I define as an automated hermeneutical appropriation from the side of the ML system. The latter occurs if the ML system establishes meanings and shared hermeneutical resources without allowing for human oversight, impairing understanding and communication practices among stakeholders involved in medical decision-making. Furthermore, and very much crucially, an automated hermeneutical appropriation can be recognized if physicians are strongly limited in their possibilities to safeguard patients from ML-induced hermeneutical injustice. Overall, my paper should expand the analysis of ethical issues raised by ML systems that are to be considered epistemic in nature, thus contributing to bridging the gap between these two dimensions in the ongoing debate.

Similar articles

1. Epistemic Injustice and Illness. J Appl Philos. 2017 Feb;34(2):172-190. doi: 10.1111/japp.12172. Epub 2016 Feb 8.
2. Epistemic Injustice and Nonmaleficence. J Bioeth Inq. 2023 Sep;20(3):447-456. doi: 10.1007/s11673-023-10273-4. Epub 2023 Jun 28.
3. Testimonial injustice in medical machine learning. J Med Ethics. 2023 Aug;49(8):536-540. doi: 10.1136/jme-2022-108630. Epub 2023 Jan 12.
4. Epistemic injustice in healthcare: a philosophical analysis. Med Health Care Philos. 2014 Nov;17(4):529-40. doi: 10.1007/s11019-014-9560-2.

Cited by

1. The need for epistemic humility in AI-assisted pain assessment. Med Health Care Philos. 2025 Jun;28(2):339-349. doi: 10.1007/s11019-025-10264-9. Epub 2025 Mar 15.
2. Opportunities for incorporating intersectionality into biomedical informatics. J Biomed Inform. 2024 Jun;154:104653. doi: 10.1016/j.jbi.2024.104653. Epub 2024 May 10.
3. JAMIA at 30: looking back and forward. J Am Med Inform Assoc. 2023 Dec 22;31(1):1-9. doi: 10.1093/jamia/ocad215.

References

1. Testimonial injustice in medical machine learning. J Med Ethics. 2023 Aug;49(8):536-540. doi: 10.1136/jme-2022-108630. Epub 2023 Jan 12.
2. On the ethics of algorithmic decision-making in healthcare. J Med Ethics. 2020 Mar;46(3):205-211. doi: 10.1136/medethics-2019-105586. Epub 2019 Nov 20.
3. A guide to deep learning in healthcare. Nat Med. 2019 Jan;25(1):24-29. doi: 10.1038/s41591-018-0316-z. Epub 2019 Jan 7.
4. Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. 2019 Mar;45(3):156-160. doi: 10.1136/medethics-2018-105118. Epub 2018 Nov 22.
