Luo Xiaochen, Ghosh Smita, Tilley Jacqueline L, Besada Patricia, Wang Jinqiu, Xiang Yangyang
Department of Counseling Psychology, Santa Clara University, Santa Clara, USA.
Department of Mathematics and Computer Science, Santa Clara University, Santa Clara, USA.
Digit Health. 2025 Jul 10;11:20552076251351088. doi: 10.1177/20552076251351088. eCollection 2025 Jan-Dec.
Generative artificial intelligence (genAI) has become a popular means for the general public to address mental health needs despite the lack of regulatory oversight. Our study used a digital ethnographic approach to understand the perspectives of individuals who engaged with a genAI tool, ChatGPT, for psychotherapeutic purposes.
We systematically collected and analyzed all Reddit posts from January 2024 containing the keywords "ChatGPT" and "therapy" in English. Using thematic analysis, we examined users' therapeutic intentions, patterns of engagement, and perceptions of both the appealing and unappealing aspects of using ChatGPT for mental health needs.
Our findings showed that users utilized ChatGPT to manage mental health problems, seek self-discovery, obtain companionship, and gain mental health literacy. Engagement patterns included using ChatGPT to simulate a therapist, coaching its responses, seeking guidance, re-enacting distressing events, externalizing thoughts, assisting real-life therapy, and disclosing personal secrets. Users found ChatGPT appealing due to perceived therapist-like qualities (e.g. emotional support, accurate understanding, and constructive feedback) and machine-like benefits (e.g. constant availability, expansive cognitive capacity, lack of negative reactions, and perceived objectivity). Concerns regarding privacy, emotional depth, and long-term growth were raised, though infrequently.
Our findings highlighted how users exercised agency to co-create digital therapeutic spaces with genAI for mental health needs. Users developed varied internal representations of genAI, suggesting a tendency to cultivate mental relationships during the self-help process. The positive, and sometimes idealized, perceptions of genAI as objective, empathic, effective, and free from negativity pointed to both its therapeutic potential and its risks, which call for AI literacy and increased ethical awareness among the general public. We conclude with several research, clinical, ethical, and policy recommendations.