Giorgia Pozzi
Faculty of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands.
Ethics Inf Technol. 2023;25(1):3. doi: 10.1007/s10676-023-09676-z. Epub 2023 Jan 23.
Artificial intelligence (AI)-based technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients' likelihood of opioid addiction and misuse (Prescription Drug Monitoring Program, PDMP, algorithmic platforms). Drawing on this analysis, I aim to show that the wrong inflicted on epistemic agents involved in and affected by these systems' decision-making processes can be captured through the lens of Miranda Fricker's account of hermeneutical injustice. I further argue that ML-induced hermeneutical injustice is particularly harmful due to what I define as an automated hermeneutical appropriation on the part of the ML system. The latter occurs when the ML system establishes meanings and shared hermeneutical resources without allowing for human oversight, impairing understanding and communication practices among stakeholders involved in medical decision-making. Furthermore, and crucially, an automated hermeneutical appropriation can be recognized when physicians are strongly limited in their possibilities to safeguard patients from ML-induced hermeneutical injustice. Overall, my paper aims to expand the analysis of ethical issues raised by ML systems that should be considered epistemic in nature, thus contributing to bridging the gap between these two dimensions in the ongoing debate.