Department of Psychiatry, University of Oxford, Oxford, United Kingdom.
Department of Psychiatry, University of Oxford, Oxford, United Kingdom; Centre for Artificial Intelligence in Precision Medicines, University of Oxford, United Kingdom; King Abdulaziz University, Saudi Arabia.
Artif Intell Med. 2024 Nov;157:103002. doi: 10.1016/j.artmed.2024.103002. Epub 2024 Oct 23.
The entry of large language models (LLMs) into research and commercial spaces has led to a trend of ever-larger models, with initial promises of generalisability. This was followed by a widespread desire to downsize and create specialised models without the need for full fine-tuning, using Parameter-Efficient Fine-Tuning (PEFT) methods. We present an investigation into the suitability of different PEFT methods for clinical decision-making tasks, across a range of model sizes, including extremely small models with as few as 25 million parameters. Our analysis shows that the performance of most PEFT approaches varies significantly from one task to another, with the exception of LoRA, which maintains relatively high performance across all model sizes and tasks, typically approaching or matching the performance of full fine-tuning. The effectiveness of PEFT methods in the clinical domain is evident, particularly for specialised models that can operate on low-cost, in-house computing infrastructure. The advantages of these models, in terms of speed and reduced training costs, dramatically outweigh any performance gain from large foundation LLMs. Furthermore, we highlight how domain-specific pre-training interacts with PEFT methods and model size, finding domain-specific pre-training to be particularly important in smaller models, and discuss how these factors interplay to provide the best efficiency-performance trade-off. Full code available at: https://github.com/nlpie-research/efficient-ml.
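To make the LoRA setup described above concrete, the following is a minimal sketch of parameter-efficient fine-tuning for a clinical classification task using the Hugging Face transformers and peft libraries. The model name, hyperparameters (rank, alpha, dropout), and label count are illustrative assumptions, not taken from the paper; the authors' actual configurations are in the linked repository.

```python
# Illustrative LoRA fine-tuning setup (assumed hyperparameters, not the paper's exact config)
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Load a small clinical encoder; any BERT-style checkpoint could be substituted here
model = AutoModelForSequenceClassification.from_pretrained(
    "emilyalsentzer/Bio_ClinicalBERT",  # example clinical model, chosen for illustration
    num_labels=2,                       # e.g. a binary clinical decision task
)

# LoRA injects small low-rank adapter matrices into the attention projections,
# so only a tiny fraction of parameters is trained while the base model stays frozen
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,                                # adapter rank (assumed value)
    lora_alpha=16,                      # scaling factor (assumed value)
    lora_dropout=0.1,
    target_modules=["query", "value"],  # typical targets for BERT-style attention
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable share vs. full fine-tuning
```

The wrapped model can then be passed to a standard Trainer loop; the key point is that the trainable parameter count drops to well under 1% of the base model, which is what makes training feasible on low-cost, in-house hardware.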