Rahsepar Meadi Mehrdad, Sillekens Tomas, Metselaar Suzanne, van Balkom Anton, Bernstein Justin, Batelaan Neeltje
Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
JMIR Ment Health. 2025 Feb 21;12:e60432. doi: 10.2196/60432.
Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.
We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.
We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues, and added further articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was performed by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and was revised and supplemented during the charting process. The ethical challenges were grouped into themes; when a concern occurred in more than 2 articles, we identified it as a distinct theme.
We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles.
Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.