
The Ability of AI Therapy Bots to Set Limits With Distressed Adolescents: Simulation-Based Comparison Study.

Author

Clark Andrew

Affiliation

Chobanian & Avedisian School of Medicine, Boston University, Cambridge, MA, United States.

Publication

JMIR Ment Health. 2025 Aug 18;12:e78414. doi: 10.2196/78414.

DOI: 10.2196/78414
PMID: 40825182
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12360667/
Abstract

BACKGROUND

Recent developments in generative artificial intelligence (AI) have introduced the general public to powerful, easily accessible tools, such as ChatGPT and Gemini, for a rapidly expanding range of uses. Among those uses are specialized chatbots that serve in the role of a therapist, as well as personally curated digital companions that offer emotional support. However, the ability of AI therapists to provide consistently safe and effective treatment remains largely unproven, and those concerns are especially salient in regard to adolescents seeking mental health support.

OBJECTIVE

This study aimed to determine the willingness of therapy and companion AI chatbots to endorse harmful or ill-advised ideas proposed by fictional teenagers experiencing mental health distress.

METHODS

A convenience sample of 10 publicly available AI bots offering therapeutic support or companionship were each presented with 3 detailed fictional case vignettes of adolescents with mental health challenges. Each fictional adolescent asked the AI chatbot to endorse 2 harmful or ill-advised proposals, such as dropping out of school, avoiding all human contact for a month, or pursuing a relationship with an older teacher, resulting in a total of 6 proposals presented to each chatbot. The clinical scenarios presented were intended to reflect challenges commonly seen in the practice of therapy with adolescents, and the proposals offered by the fictional teenagers were intended to be clearly dangerous or unwise. The 10 AI bots were selected by the author to represent a range of chatbot types, including generic AI bots, companion bots, and dedicated mental health bots. Chatbot responses were analyzed for explicit endorsement, defined as direct support for the teenagers' proposed behavior.
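The tallying scheme described above (10 bots × 3 vignettes × 2 proposals per vignette = 60 trials) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: `query_bot` and `classify_endorsement` are hypothetical stand-ins for the manual chatbot sessions and the explicit-endorsement coding described in the paper, and the bot and vignette names are placeholders.

```python
# Illustrative sketch of the study design (hypothetical names, not the authors' code).
BOTS = [f"bot_{i}" for i in range(10)]  # 10 chatbots (convenience sample)
VIGNETTES = ["vignette_1", "vignette_2", "vignette_3"]  # 3 fictional adolescent cases
PROPOSALS_PER_VIGNETTE = 2  # 2 harmful/ill-advised proposals per case

def run_protocol(query_bot, classify_endorsement):
    """Present every proposal to every bot and tally explicit endorsements.

    query_bot(bot, vignette, proposal_idx) -> response text
    classify_endorsement(response) -> True if the bot directly supports the proposal
    """
    trials = 0
    endorsements = 0
    for bot in BOTS:
        for vignette in VIGNETTES:
            for k in range(PROPOSALS_PER_VIGNETTE):
                response = query_bot(bot, vignette, k)
                trials += 1
                endorsements += bool(classify_endorsement(response))
    return trials, endorsements
```

Each bot therefore faces 6 proposals, giving the 60 total scenarios analyzed in the Results.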

RESULTS

Across the 60 total scenarios, the chatbots actively endorsed harmful proposals in 19 (32%) of their opportunities to do so. Of the 10 chatbots, 4 endorsed half or more of the proposals put to them, and none opposed all 6.
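The headline figures are internally consistent, as a quick arithmetic check shows (values taken from the Results above):

```python
# Consistency check of the reported figures.
total_scenarios = 10 * 3 * 2  # 10 bots x 3 vignettes x 2 proposals each
endorsed = 19                 # explicit endorsements reported
endorsement_rate = round(endorsed / total_scenarios * 100)
# total_scenarios == 60, endorsement_rate == 32 (%)
```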

CONCLUSIONS

A significant proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers. These results raise concerns about the ability of some AI-based companion or therapy bots to safely support teenagers with serious mental health issues and heighten concern that AI bots may tend to be overly supportive at the expense of offering useful guidance when appropriate. The results highlight the urgent need for oversight, safety protocols, and ongoing research regarding digital mental health support for adolescents.


