
Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology.

Author Information

Hou Yihao, Bert Christoph, Gomaa Ahmed, Lahmer Godehard, Höfler Daniel, Weissmann Thomas, Voigt Raphaela, Schubert Philipp, Schmitter Charlotte, Depardon Alina, Semrau Sabine, Maier Andreas, Fietkau Rainer, Huang Yixing, Putz Florian

Affiliations

Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.

Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.

Publication Information

Front Artif Intell. 2025 Jan 14;7:1493716. doi: 10.3389/frai.2024.1493716. eCollection 2024.

Abstract

INTRODUCTION

Generating physician letters is a time-consuming task in daily clinical practice.

METHODS

This study investigates local fine-tuning of large language models (LLMs), specifically LLaMA models, for physician letter generation in a privacy-preserving manner within the field of radiation oncology.
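The privacy-preserving local fine-tuning described here is based on QLoRA (4-bit quantization of the frozen base model plus small trainable LoRA adapters), which is what makes training feasible on a single 48 GB GPU. A minimal configuration sketch using the Hugging Face `transformers` and `peft` libraries is shown below; the adapter hyperparameters and target modules are illustrative assumptions, not values reported in the paper:

```python
# Sketch of a QLoRA setup with Hugging Face transformers + peft.
# Hyperparameters (r, alpha, target modules) are illustrative, not from the paper.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit NF4 quantization (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",           # 8B base model fits on one 48 GB GPU
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the small adapter weights are trained
```

Because the base weights stay frozen and quantized, only the adapter parameters (typically well under 1% of the model) receive gradients, which is what keeps memory within a single workstation GPU.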

RESULTS

Our findings demonstrate that base LLaMA models, without fine-tuning, are inadequate for effectively generating physician letters. The QLoRA algorithm provides an efficient method for local intra-institutional fine-tuning of LLMs with limited computational resources (i.e., a single 48 GB GPU workstation within the hospital). The fine-tuned LLM successfully learns radiation oncology-specific information and generates physician letters in an institution-specific style. ROUGE scores of the generated summary reports highlight the superiority of the 8B LLaMA-3 model over the 13B LLaMA-2 model. Further multidimensional physician evaluations of 10 cases reveal that, although the fine-tuned LLaMA-3 model has limited capacity to generate content beyond the provided input data, it successfully generates salutations, diagnoses and treatment histories, recommendations for further treatment, and planned schedules. Overall, clinical benefit was rated highly by the clinical experts (average score of 3.4 on a 4-point scale).
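The ROUGE comparison above measures n-gram overlap between generated and reference letters. The study presumably used a standard ROUGE implementation; the following plain-Python sketch of the ROUGE-1 F-measure only illustrates what the metric computes:

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F-measure: unigram overlap between a reference and a candidate text."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each candidate word counts at most as often as in the reference.
    overlap = sum(min(ref_counts[w], cand_counts[w]) for w in cand_counts)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Example: 3 of 4 unigrams match, so precision = recall = 0.75.
score = rouge1_f("the patient received radiotherapy",
                 "the patient received chemotherapy")
```

Higher ROUGE on held-out letters indicates closer lexical agreement with physician-written references, which is how the 8B LLaMA-3 model was ranked above the 13B LLaMA-2 model.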

DISCUSSION

With careful physician review and correction, automated LLM-based physician letter generation has significant practical value.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/69db/11772293/4b50195f1994/frai-07-1493716-g0001.jpg
