

Improving Radiology Report Conciseness and Structure via Local Large Language Models.

Authors

Hartsock Iryna, Araujo Cyrillo, Folio Les, Rasool Ghulam

Affiliations

Department of Machine Learning, Moffitt Cancer Center and Research Institute, Tampa, FL, USA.

Department of Diagnostic Imaging and Interventional Radiology, Moffitt Cancer Center and Research Institute, Tampa, FL, USA.

Publication

J Imaging Inform Med. 2025 Apr 21. doi: 10.1007/s10278-025-01510-w.

Abstract

Radiology reports are often lengthy and unstructured, posing challenges for referring physicians to quickly identify critical imaging findings while increasing the risk of missed information. This retrospective study aimed to enhance radiology reports by making them concise and well-structured, with findings organized by relevant organs. To achieve this, we utilized private large language models (LLMs) deployed locally within our institution's firewall, ensuring data security and minimizing computational costs. Using a dataset of 814 radiology reports from seven board-certified body radiologists at [-blinded for review-], we tested five prompting strategies within the LangChain framework. After evaluating several models, the Mixtral LLM demonstrated superior adherence to formatting requirements compared to alternatives such as Llama. The optimal strategy involved condensing reports first and then applying structured formatting based on specific instructions, reducing verbosity while improving clarity. Across all radiologists and reports, the Mixtral LLM reduced redundant word counts by more than 53%. These findings highlight the potential of locally deployed, open-source LLMs to streamline radiology reporting. By generating concise, well-structured reports, these models enhance information retrieval and better meet the needs of referring physicians, ultimately improving clinical workflows.
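The optimal strategy described above is a two-step prompt chain: condense first, then impose an organ-based structure. The following is a minimal sketch of that chaining logic, not the authors' implementation; the prompt wording, the `run_llm` stub, and the function names are all assumptions. In a real deployment, `run_llm` would invoke a locally hosted model (e.g. Mixtral) through LangChain rather than echo its input.

```python
# Hypothetical sketch of the condense-then-structure prompt chain.
# Prompt texts and helper names are illustrative, not from the paper.

CONDENSE_PROMPT = (
    "Rewrite the following radiology report findings, removing redundant "
    "wording while preserving every clinical finding:\n\n{report}"
)

STRUCTURE_PROMPT = (
    "Reorganize the condensed findings below under organ headings "
    "(e.g. LUNGS, LIVER, KIDNEYS), one finding per line:\n\n{report}"
)

def run_llm(prompt: str) -> str:
    """Stub standing in for a local LLM call; a real pipeline would send
    the prompt to a locally deployed model via LangChain instead."""
    # Echo back the report portion of the prompt so the chain is runnable.
    return prompt.split("\n\n", 1)[1]

def condense_then_structure(report: str) -> str:
    """Apply the two-step chain: condense first, then structure."""
    condensed = run_llm(CONDENSE_PROMPT.format(report=report))
    structured = run_llm(STRUCTURE_PROMPT.format(report=condensed))
    return structured

raw = "Lungs are clear. The lungs appear clear bilaterally. Liver is normal."
print(condense_then_structure(raw))
```

Keeping the two prompts separate, rather than asking for conciseness and structure in one instruction, mirrors the paper's finding that sequencing the tasks reduced verbosity while preserving formatting adherence.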

