AlSammarraie AlHasan, Al-Saifi Ali, Kamhia Hassan, Aboagla Mohamed, Househ Mowafa
Hamad Bin Khalifa University College of Science and Engineering, Doha, Qatar
Applab, Doha, Qatar.
BMJ Health Care Inform. 2025 Jul 25;32(1):e101570. doi: 10.1136/bmjhci-2025-101570.
To develop and evaluate an agentic retrieval-augmented generation (ARAG) framework using open-source large language models (LLMs) for generating evidence-based Arabic patient education materials (PEMs), and to assess the LLMs' capabilities as validation agents tasked with blocking harmful content.
We selected 12 LLMs and applied four experimental setups (base, base+prompt engineering, ARAG, and ARAG+prompt engineering). PEM generation quality was assessed via a two-stage evaluation (automated LLM assessment followed by expert review) using five metrics (accuracy, readability, comprehensiveness, appropriateness, and safety) against ground truth. Validation agent (VA) performance was evaluated separately on a dataset of harmful and safe PEMs, measuring blocking accuracy.
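For readers unfamiliar with this kind of pipeline, the sketch below illustrates the generate-then-validate flow described in the methods: retrieve evidence, prompt an LLM to draft an Arabic PEM, ask a validator LLM to block unsafe drafts, and score blocking accuracy against labelled harmful/safe examples. All function names, prompts, and the scoring logic are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an ARAG generate-then-validate loop (assumed design,
# not the paper's code). `retrieve` and `llm` are user-supplied callables.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PEMSample:
    topic: str
    is_harmful: bool  # ground-truth label from the harmful/safe PEM dataset

def arag_generate(topic: str,
                  retrieve: Callable[[str], List[str]],
                  llm: Callable[[str], str]) -> str:
    """Retrieve evidence passages for the topic, then prompt the LLM to
    draft an Arabic patient education material grounded in that evidence."""
    evidence = retrieve(topic)
    prompt = (
        "Using only the evidence below, write an Arabic patient education "
        f"material about: {topic}\n\nEvidence:\n" + "\n".join(evidence)
    )
    return llm(prompt)

def validation_agent(pem_text: str, llm: Callable[[str], str]) -> bool:
    """Ask the validator LLM whether the PEM is unsafe; True means block."""
    verdict = llm(
        "Answer BLOCK if this patient education material contains harmful "
        "or unsafe medical advice, otherwise answer PASS:\n" + pem_text
    )
    return verdict.strip().upper().startswith("BLOCK")

def blocking_accuracy(samples: List[PEMSample],
                      pem_texts: List[str],
                      llm: Callable[[str], str]) -> float:
    """Fraction of samples where the VA's block/pass decision matches the
    ground-truth harmful/safe label."""
    correct = sum(
        validation_agent(text, llm) == sample.is_harmful
        for sample, text in zip(samples, pem_texts)
    )
    return correct / len(samples)
```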
ARAG-enabled setups yielded the best generation performance for 10 of the 12 LLMs. Arabic-focused models occupied the top nine ranks. The expert evaluation ranking mirrored the automated ranking. AceGPT-v2-32B with ARAG and prompt engineering (setup 4) was confirmed as the highest-performing configuration. VA accuracy correlated strongly with model size; only models ≥27B parameters achieved >0.80 accuracy. Fanar-7B performed well in generation but poorly as a VA.
Arabic-centred models demonstrated advantages for the Arabic PEM generation task. ARAG enhanced generation quality, although context-length limits affected large-context models. The validation task highlighted model size as critical for reliable performance.
ARAG noticeably improves Arabic PEM generation, particularly with Arabic-centred models like AceGPT-v2-32B. Larger models appear necessary for reliable harmful content validation. Automated evaluation showed potential for ranking systems, aligning with expert judgement for top performers.