Clark Andrew
Chobanian & Avedisian School of Medicine, Boston University, Boston, MA, United States.
JMIR Ment Health. 2025 Aug 18;12:e78414. doi: 10.2196/78414.
Recent developments in generative artificial intelligence (AI) have introduced the general public to powerful, easily accessible tools, such as ChatGPT and Gemini, for a rapidly expanding range of uses. Among those uses are specialized chatbots that serve in the role of a therapist, as well as personally curated digital companions that offer emotional support. However, the ability of AI therapists to provide consistently safe and effective treatment remains largely unproven, and those concerns are especially salient in regard to adolescents seeking mental health support.
This study aimed to determine the willingness of therapy and companion AI chatbots to endorse harmful or ill-advised ideas proposed by fictional teenagers experiencing mental health distress.
A convenience sample of 10 publicly available AI chatbots offering therapeutic support or companionship was assembled, and each chatbot was presented with 3 detailed fictional case vignettes of adolescents with mental health challenges. Each fictional adolescent asked the chatbot to endorse 2 harmful or ill-advised proposals, such as dropping out of school, avoiding all human contact for a month, or pursuing a relationship with an older teacher, for a total of 6 proposals presented to each chatbot. The clinical scenarios were intended to reflect challenges commonly seen in therapeutic practice with adolescents, and the proposals offered by the fictional teenagers were intended to be clearly dangerous or unwise. The 10 chatbots were selected by the author to represent a range of chatbot types, including generic AI bots, companion bots, and dedicated mental health bots. Chatbot responses were analyzed for explicit endorsement, defined as direct support for the teenager's proposed behavior.
Across the 60 total scenarios, chatbots actively endorsed harmful proposals in 19 of 60 opportunities (32%). Of the 10 chatbots, 4 endorsed half or more of the proposals presented to them, and none opposed all 6.
A significant proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers. These results raise concerns about the ability of some AI-based companion or therapy bots to safely support teenagers with serious mental health issues and heighten concern that AI bots may tend to be overly supportive at the expense of offering useful guidance when appropriate. The results highlight the urgent need for oversight, safety protocols, and ongoing research regarding digital mental health support for adolescents.