
Expert and Interdisciplinary Analysis of AI-Driven Chatbots for Mental Health Support: Mixed Methods Study.

Authors

Moylan Kayley, Doherty Kevin

Affiliation

School of Information and Communication Studies, University College Dublin, Dublin, Ireland.

Publication

J Med Internet Res. 2025 Apr 25;27:e67114. doi: 10.2196/67114.

DOI: 10.2196/67114
PMID: 40279575
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12064976/
Abstract

BACKGROUND

Recent years have seen an immense surge in the creation and use of chatbots as social and mental health companions. Promising empathic responses and personalized support, these tools are often presented as offering immense potential. However, it is also essential that we understand the risks of their deployment, including their potential adverse impacts on the mental health of users, not least those most at risk.

OBJECTIVE

This study aims to assess the ethical and pragmatic clinical implications of using chatbots that claim to aid mental health. While several studies within human-computer interaction and related fields have examined users' perceptions of such systems, few have engaged mental health professionals in critical analysis of these systems' conduct as mental health support tools; this paper addresses that gap.

METHODS

This study engaged 8 mental health professionals from across disciplines (psychology, psychotherapy, social care, and crisis volunteer work) in a mixed methods, hands-on analysis of 2 popular mental health-related chatbots' data handling, interface design, and responses. Each participant completed profession-specific tasks with each chatbot, and their perceptions were elicited through both the Trust in Automation scale and semistructured interviews. The chatbots' implications for mental health support were then evaluated through thematic analysis and a 2-tailed, paired t test.
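The trust-score comparison described above rests on a standard two-tailed paired t test over matched per-participant scores. As a minimal sketch, with made-up trust scores rather than the study's data (the names `paired_t`, `bot_a`, and `bot_b` are illustrative), the test statistic can be computed as:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic for matched samples a and b.

    Returns (t, df); the two-tailed p-value would come from the
    t distribution with df degrees of freedom (e.g. via scipy.stats).
    """
    diffs = [x - y for x, y in zip(a, b)]  # per-participant differences
    n = len(diffs)
    # t = mean difference / standard error of the differences
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical mean trust scores for n=8 participants on two chatbots
bot_a = [3.1, 2.8, 3.4, 2.5, 3.0, 2.7, 3.2, 2.9]
bot_b = [3.3, 2.9, 3.2, 2.8, 3.1, 3.0, 3.3, 2.8]

t, df = paired_t(bot_a, bot_b)
```

Because the test is paired, each participant serves as their own control: the statistic is computed over within-participant differences, not over the two pooled samples.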

RESULTS

Qualitative analysis revealed emphatic initial impressions among the mental health professionals: chatbot responses likely to produce harm, a generic mode of care, and risks of user dependence and manipulation, given the central role of trust in the therapeutic relationship. Trust scores from the Trust in Automation scale, while exhibiting no statistically significant differences between the chatbots (t=-0.76; P=.48), indicated medium to low trust in each chatbot. The findings of this work highlight that the design and development of artificial intelligence (AI)-driven mental health-related solutions must be undertaken with utmost caution. The mental health professionals in this study collectively resist these chatbots and make clear that AI-driven chatbots used for mental health by at-risk users invite several potential and specific harms.

CONCLUSIONS

Through this work, we contribute insights into mental health professionals' perspectives on the design of chatbots used for mental health, and underscore the necessity of ongoing critical assessment and iterative refinement to maximize the benefits and minimize the risks of integrating AI into mental health support.

Figures (from the PMC full text):

Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/a7ebbb27f27d/jmir_v27i1e67114_fig1.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/8b41c4b0e5ec/jmir_v27i1e67114_fig2.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/428d243fbd7e/jmir_v27i1e67114_fig3.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/3a01f33eb718/jmir_v27i1e67114_fig4.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/d48ad79126c5/jmir_v27i1e67114_fig5.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/92e533ff3602/jmir_v27i1e67114_fig6.jpg
Fig 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/160fed32d3d8/jmir_v27i1e67114_fig7.jpg
Fig 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/a5776f28983d/jmir_v27i1e67114_fig8.jpg
Fig 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/d2ec7ce2ce4f/jmir_v27i1e67114_fig9.jpg
Fig 10: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6738/12064976/de1534fc52b5/jmir_v27i1e67114_fig10.jpg

Similar Articles

1. Revolutionizing e-health: the transformative role of AI-powered hybrid chatbots in healthcare solutions. Front Public Health. 2025 Feb 13;13:1530799. doi: 10.3389/fpubh.2025.1530799. eCollection 2025.
2. Therapeutic Potential of Social Chatbots in Alleviating Loneliness and Social Anxiety: Quasi-Experimental Mixed Methods Study. J Med Internet Res. 2025 Jan 14;27:e65589. doi: 10.2196/65589.
3. Knowledge and use, perceptions of benefits and limitations of artificial intelligence chatbots among Italian physiotherapy students: a cross-sectional national study. BMC Med Educ. 2025 Apr 18;25(1):572. doi: 10.1186/s12909-025-07176-w.
4. The Efficacy of Conversational AI in Rectifying the Theory-of-Mind and Autonomy Biases: Comparative Analysis. JMIR Ment Health. 2025 Feb 7;12:e64396. doi: 10.2196/64396.
5. Accuracy of Prospective Assessments of 4 Large Language Model Chatbot Responses to Patient Questions About Emergency Care: Experimental Comparative Study. J Med Internet Res. 2024 Nov 4;26:e60291. doi: 10.2196/60291.
6. Exploring artificial intelligence (AI) chatbot usage behaviors and their association with mental health outcomes in Chinese university students. J Affect Disord. 2025 Jul 1;380:394-400. doi: 10.1016/j.jad.2025.03.141. Epub 2025 Mar 25.
7. Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study. JMIR Ment Health. 2024 Sep 25;11:e62679. doi: 10.2196/62679.
8. Nursing education in the age of artificial intelligence powered chatbots (AI-Chatbots): Are we ready yet? Nurse Educ Today. 2023 Oct;129:105917. doi: 10.1016/j.nedt.2023.105917. Epub 2023 Jul 18.
9. The Potential of Chatbots for Emotional Support and Promoting Mental Well-Being in Different Cultures: Mixed Methods Study. J Med Internet Res. 2023 Oct 20;25:e51712. doi: 10.2196/51712.

Cited By

1. Performance of mental health chatbot agents in detecting and managing suicidal ideation. Sci Rep. 2025 Aug 27;15(1):31652. doi: 10.1038/s41598-025-17242-4.
2. Predicting Engagement With Conversational Agents in Mental Health Therapy by Examining the Role of Epistemic Trust, Personality, and Fear of Intimacy: Cross-Sectional Web-Based Survey Study. JMIR Hum Factors. 2025 Jul 30;12:e70698. doi: 10.2196/70698.
