Department of Philosophy, Trinity College Dublin, Dublin, Ireland.
School of Nursing and Midwifery, Trinity College Dublin, Dublin, Ireland.
J Eval Clin Pract. 2021 Jun;27(3):497-503. doi: 10.1111/jep.13515. Epub 2020 Nov 13.
In recent years there has been an explosion of interest in Artificial Intelligence (AI), both in health care and in academic philosophy. This has been driven mainly by the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have enabled rapid progress in many areas. However, the use of this technology has brought with it philosophical and practical problems, in particular epistemic and ethical ones. In this paper the authors, with backgrounds in philosophy, maternity care practice, and clinical research, draw upon and extend a recent framework for shared decision-making (SDM) that identified a duty of care to the client's knowledge as a necessary condition for SDM. This duty entails a responsibility to acknowledge and overcome epistemic defeaters. The framework is applied to the use of AI in maternity care, in particular the use of machine learning and deep learning technology to attempt to enhance electronic fetal monitoring (EFM). In doing so, several sub-kinds of epistemic defeater (namely transparent, opaque, underdetermined, and inherited defeaters) are taxonomized and discussed. The authors argue that, although effective current or future AI-enhanced EFM may impose an epistemic obligation on clinicians to rely on such systems' predictions or diagnoses as input to SDM, such obligations may be overridden by inherited defeaters, which are caused by a form of algorithmic bias. The existence of inherited defeaters implies that the duty of care to the client's knowledge extends to any situation in which a clinician (or anyone else) is involved in producing training data for a system that will be used in SDM. Any future AI must be capable of assessing women individually, taking into account a wide range of factors, including women's preferences, to provide a holistic range of evidence for clinical decision-making.