Huang Jiajie, Lai Honghao, Zhao Weilong, Xia Danni, Bai Chunyang, Sun Mingyao, Liu Jianing, Liu Jiayi, Pan Bei, Tian Jinhui, Ge Long
Department of Health Policy and Management, School of Public Health, Lanzhou University, Lanzhou, China.
Evidence-Based Social Science Research Center, School of Public Health, Lanzhou University, Lanzhou, China.
J Med Internet Res. 2025 Jun 24;27:e70450. doi: 10.2196/70450.
The revised Cochrane risk-of-bias tool for randomized trials (RoB2) overcomes the limitations of its predecessor but introduces new implementation challenges. Studies report low interrater reliability and substantial time requirements for RoB2 assessments. Large language models (LLMs) may assist with RoB2 implementation, although their effectiveness remains uncertain.
This study aims to evaluate the accuracy of LLMs in RoB2 assessments to explore their potential as research assistants for bias evaluation.
We systematically searched the Cochrane Library (through October 2023) for reviews that applied RoB2, categorized by the effect of interest (adhering to intervention or assignment to intervention). From 86 eligible reviews of randomized controlled trials (RCTs; 1399 trials in total), we randomly selected 46 RCTs (23 per category). Three experienced reviewers independently assessed all 46 RCTs using RoB2, recording the assessment time for each trial; their judgments were reconciled through consensus. Six RCTs (3 from each category) were randomly selected for prompt development and optimization. The remaining 40 trials formed the internal validation standard, while the judgments published in the Cochrane Reviews served as the external validation standard. Primary outcomes were extracted as reported in the corresponding Cochrane Reviews. We calculated accuracy rates, Cohen κ, and time differences.
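The abstract does not include analysis code; the following minimal Python sketch illustrates how the reported agreement statistics (accuracy and Cohen κ between two sets of risk-of-bias judgments) could be computed. All names and data below are hypothetical, not taken from the study.

```python
# Minimal sketch (not from the paper): accuracy and Cohen's kappa for
# paired risk-of-bias judgments. All data below are hypothetical.
from collections import Counter

LEVELS = ["Low risk", "Some concerns", "High risk"]

def accuracy(ref, test):
    """Proportion of trials on which the two raters agree."""
    return sum(r == t for r, t in zip(ref, test)) / len(ref)

def cohen_kappa(ref, test):
    """Chance-corrected agreement between two raters."""
    n = len(ref)
    p_o = accuracy(ref, test)  # observed agreement
    ref_freq, test_freq = Counter(ref), Counter(test)
    # expected agreement under independent marginal distributions
    p_e = sum(ref_freq[lv] * test_freq[lv] for lv in LEVELS) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgments for 5 trials (ref = consensus, test = LLM)
ref  = ["High risk", "Low risk", "Some concerns", "High risk", "Low risk"]
test = ["High risk", "Low risk", "High risk",     "High risk", "Low risk"]
print(f"accuracy = {accuracy(ref, test):.2f}, kappa = {cohen_kappa(ref, test):.2f}")
```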
We identified significant differences between the Cochrane and reviewer judgments, particularly in domains 1, 4, and 5, likely reflecting different standards for assessing randomization and blinding. Among the 20 trials in the adhering category, 18 were classified as "High risk" by the Cochrane Reviews and 19 by the reviewers, whereas the assignment-focused RCTs showed a more heterogeneous risk distribution. Compared with the Cochrane Reviews, the LLM's accuracy was 57.5% for the overall judgment (assignment) and 70% for the overall judgment (adhering); compared with the reviewer judgments, its accuracy was 65% and 70%, respectively. Average accuracy across the remaining 6 domains was 65.2% (95% CI 57.6-72.7) against the Cochrane Reviews and 74.2% (95% CI 64.7-83.9) against the reviewers. At the signaling-question level, the LLM achieved an average accuracy of 83.2% (95% CI 77.5-88.9), exceeding 70% for all questions except 2.4 (assignment), 2.5 (assignment), 3.3, and 3.4. When domain judgments were derived from LLM-generated signaling-question answers using the RoB2 algorithm, rather than taken directly from LLM domain-level output, accuracy improved substantially for domain 2 (adhering; from 55% to 95%) and the overall judgment (adhering; from 70% to 90%). The LLM was highly consistent between iterations (average 85.2%, 95% CI 85.15-88.79) and completed assessments in 1.9 minutes versus 31.5 minutes for human reviewers (mean difference 29.6, 95% CI 25.6-33.6 minutes).
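The gain from algorithm-derived domain judgments suggests a simple post-processing step: have the LLM answer only the signaling questions, then apply the RoB2 decision rules deterministically. The sketch below illustrates the idea with a simplified rule set for domain 1; it is not the official RoB2 algorithm, which is specified in the tool's guidance and is more detailed.

```python
# Simplified illustration (NOT the official RoB2 algorithm): mapping
# LLM-generated signaling-question answers to a domain 1 judgment.
# Answers use the RoB2 response options: Y, PY, PN, N, NI.
YES = {"Y", "PY"}
NO = {"N", "PN"}

def domain1_judgment(q11: str, q12: str, q13: str) -> str:
    """q11: was the allocation sequence random?
    q12: was the allocation sequence concealed?
    q13: do baseline imbalances suggest a problem with randomization?"""
    if q12 in NO:
        return "High risk"        # allocation not concealed
    if q12 in YES and q11 not in NO and q13 in NO | {"NI"}:
        return "Low risk"         # concealed, random, no imbalance signal
    return "Some concerns"        # everything in between

# Example: concealed allocation, random sequence, no baseline concerns
print(domain1_judgment("Y", "Y", "PN"))  # -> Low risk
```

Because the mapping is deterministic, any residual error is confined to the signaling-question answers, where the study reports the LLM's highest accuracy.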
LLMs achieved commendable accuracy when guided by structured prompts, particularly when processing methodological details via structured reasoning. While not a replacement for human assessment, LLMs show strong potential for assisting RoB2 evaluations. Larger studies with refined prompting could further improve performance.