Sadeh-Sharvit Shiri, Rego Simon A, Jefroykin Samuel, Peretz Gal, Kupershmidt Tomer
Center for m2Health, Palo Alto University, Palo Alto, CA, United States.
Eleos Health, Waltham, MA, United States.
JMIR Form Res. 2022 Aug 16;6(8):e39846. doi: 10.2196/39846.
Although behavioral interventions have been found to be efficacious and effective in randomized clinical trials for most mental illnesses, the quality and efficacy of mental health care delivery remain inadequate in real-world settings, partly owing to suboptimal treatment fidelity. This "therapist drift" is an ongoing issue that ultimately reduces the effectiveness of treatments; however, until recently, there have been limited opportunities to assess adherence beyond large randomized controlled trials.
This study explored therapists' use of a standard component that is pertinent across most behavioral treatments: prompting clients to summarize their treatment session as a means of consolidating and augmenting their understanding of the session and the treatment plan.
The data set for this study comprised 17,607 behavioral treatment sessions administered by 322 therapists to 3519 patients in 37 behavioral health care programs across the United States. Sessions were captured by a therapy-specific artificial intelligence (AI) platform, and an automatic speech recognition system transcribed each treatment meeting and separated the data into therapist and client utterances. A search for possible session summary prompts was then conducted, with 2 psychologists validating the text that emerged.
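The abstract does not specify how candidate summary prompts were detected; the following is a minimal, hypothetical Python sketch of one way a keyword search over diarized transcripts could be run, where the prompt phrases, the (speaker, text) record format, and the function name find_summary_prompts are illustrative assumptions rather than the authors' actual pipeline. Any matches would still require validation by clinicians, as described above.

# Hypothetical sketch: flag therapist utterances that resemble session-summary
# prompts in a diarized transcript. The phrase list and data format are
# illustrative assumptions, not the authors' actual lexicon or pipeline.
import re
from typing import Iterable

# Example phrases a therapist might use when asking for a session summary.
SUMMARY_PROMPTS = [
    r"can you summarize",
    r"what are you taking away",
    r"let's recap",
    r"sum up what we",
    r"what stood out to you today",
]
PROMPT_RE = re.compile("|".join(SUMMARY_PROMPTS), re.IGNORECASE)

def find_summary_prompts(utterances: Iterable[tuple[str, str]]) -> list[str]:
    """Return therapist utterances that match any candidate summary prompt.

    `utterances` is an iterable of (speaker, text) pairs, as produced by an
    ASR plus speaker-separation step.
    """
    return [
        text
        for speaker, text in utterances
        if speaker == "therapist" and PROMPT_RE.search(text)
    ]

if __name__ == "__main__":
    session = [
        ("therapist", "Before we finish, can you summarize what we covered today?"),
        ("client", "We talked about how work stress affects my sleep."),
    ]
    # Prints the flagged therapist utterance, which would then go to human review.
    print(find_summary_prompts(session))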
We found that, despite clinical recommendations, only 54 (0.30%) sessions included a summary. Exploratory analyses indicated that session summaries mostly addressed relationships (n=27), work (n=20), change (n=6), and alcohol (n=5). Sessions with meeting summaries were also characterized by more therapist interventions, including greater use of validation, complex reflections, and proactive problem-solving techniques.
To the best of our knowledge, this is the first study to assess a large, diverse data set of real-world treatment practices. Our findings provide evidence that fidelity to the core components of empirically designed psychological interventions is a challenge in real-world settings. The results of this study can inform the development of machine learning and AI algorithms that offer nuanced, timely feedback to providers, thereby improving the delivery of evidence-based practices and the quality of mental health care services and facilitating better clinical outcomes in real-world settings.