Department of Psychiatry, Amsterdam Public Health, Mental Health program, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
Department of Ethics, Law, & Humanities, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
Bioethics. 2024 Jul;38(6):503-510. doi: 10.1111/bioe.13299. Epub 2024 May 12.
Mental health chatbots (MHCBs) designed to support individuals in coping with mental health issues are rapidly advancing. Currently, these MHCBs are used predominantly in commercial rather than clinical contexts, but this might change soon. The question is whether such use is ethically desirable. This paper addresses a critical yet understudied concern: assuming that MHCBs cannot have genuine emotions, how might this affect psychotherapy and, consequently, the quality of treatment outcomes? We argue that if MHCBs lack emotions, they cannot have genuine (affective) empathy or utilise countertransference. This gives reason to worry that MHCBs are (a) more liable to harm and (b) less likely to benefit patients than human therapists. We discuss some responses to this worry and conclude that further empirical research is necessary to determine whether it is valid. Even if it is, however, this does not mean that we should never use MHCBs. By discussing the broader ethical debate on the clinical use of chatbots, we point towards how further research can help us establish ethical boundaries for how we should use mental health chatbots.