Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review.

Author information

Rahsepar Meadi Mehrdad, Sillekens Tomas, Metselaar Suzanne, van Balkom Anton, Bernstein Justin, Batelaan Neeltje

Affiliations

Department of Psychiatry, Amsterdam Public Health, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.

Department of Ethics, Law, & Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.

Publication information

JMIR Ment Health. 2025 Feb 21;12:e60432. doi: 10.2196/60432.

DOI:10.2196/60432
PMID:39983102
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11890142/
Abstract

BACKGROUND

Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.

OBJECTIVE

We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.

METHODS

We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.

RESULTS

We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles.

CONCLUSIONS

Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a1e3/11890142/664ea7ef79ce/mental_v12i1e60432_fig1.jpg