Dechant Martin, Lash Eva, Shokr Sarah, O'Driscoll Ciarán
University College London, 1-19 Torrington Place, London, WC1E 7HB, United Kingdom, +44 (0) 20 7679 1897.
University of Oxford, Oxford, United Kingdom.
JMIR Form Res. 2025 Jul 18;9:e74411. doi: 10.2196/74411.
Digital interventions have been proposed as a solution to meet the growing demand for mental health support. Large language models (LLMs) have emerged as a promising technology for creating more personalized and adaptive mental health chatbots. While LLMs generate responses based on statistical patterns in training data rather than through conscious reasoning, they can be designed to support important psychological processes. Prospection, the ability to envision and plan for future outcomes, represents a transdiagnostic process altered across various mental health conditions that could be effectively targeted through such interventions. We developed "Future Me," an LLM-powered chatbot designed to facilitate future-oriented thinking and promote goal pursuit using evidence-based interventions including visualization, implementation intentions, and values clarification.
This study aims to understand how users engage with Future Me, evaluate its effectiveness in supporting future-oriented thinking, and assess its acceptability across different populations, with particular attention to postgraduate students' stress management needs. We also seek to identify design improvements that could enhance the chatbot's ability to support users' mental well-being.
In total, 2 complementary studies were conducted. Study 1 (n=20) examined how postgraduate students used Future Me during a single guided session, followed by semistructured interviews. Study 2 (n=14) investigated how postgraduate students interacted with Future Me over a 1-week period, with interviews before and after usage. Both studies analyzed conversation transcripts and interview data using thematic analysis to understand usage patterns, perceived benefits, and limitations.
Across both studies, participants primarily engaged with Future Me to discuss career or education goals, personal obstacles, and relationship concerns. Users valued Future Me's ability to provide clarity around goal-setting (85% of participants), its nonjudgmental nature, and its 24/7 accessibility (58%). Future Me effectively facilitated self-reflection (80%) and offered new perspectives (70%), particularly for broader future-oriented concerns. However, both studies revealed limitations in the chatbot's ability to provide personalized emotional support during high-stress situations, with participants noting that responses sometimes felt formulaic (50%) or lacked emotional depth. Postgraduate students specifically emphasized the need for greater context awareness during periods of academic stress (58%). Overall, 57% of requests occurred outside office hours, and daily usage declined from 40 requests on day 1 to 12 by day 7.
Future Me demonstrates promise as an accessible tool for promoting prospection skills and supporting mental well-being through future-oriented thinking. However, effectiveness appears context-dependent, with prospection techniques more suitable for broader life decisions than acute stress situations. Future development should focus on creating more adaptive systems that can adjust their approach based on the user's emotional state and immediate needs. Rather than attempting to replicate human therapy entirely, chatbots like Future Me may be most effective when designed as complementary tools within broader support ecosystems, offering immediate guidance while facilitating connections to human support when needed.