Nichol Ariadne A, Halley Meghan, Federico Carole, Cho Mildred K, Sankar Pamela L
Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA.
Department of Medical Ethics & Health Policy, University of Pennsylvania, Philadelphia, Pennsylvania, USA.
AJOB Empir Bioeth. 2024 Oct-Dec;15(4):291-300. doi: 10.1080/23294515.2024.2336906. Epub 2024 Apr 8.
Machine learning (ML) is increasingly utilized in health care and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known about developers' own perspectives on their obligations to mitigate harms.
We conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.
Participants varied widely in their perspectives on personal responsibility and offered examples of both moral engagement and disengagement, albeit in a variety of forms. Although most participants (70%) made a statement indicative of moral engagement, the majority of these statements reflected only an awareness of moral issues; a smaller subset included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interest, and opportunities for action. Further, we identified eight distinct categories of moral disengagement reflecting efforts to minimize potential harms or deflect personal responsibility for preventing or mitigating harms.
These findings suggest possible facilitators of, and barriers to, the development of ethical ML that could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms may have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.