Kim Sunkyu, Lee Choong-Kun, Kim Seung-Seob
J Korean Soc Radiol. 2024 Sep;85(5):861-882. doi: 10.3348/jksr.2024.0080. Epub 2024 Sep 27.
Large language models (LLMs) have revolutionized the global technology landscape well beyond the field of natural language processing. Owing to extensive pre-training on vast datasets, contemporary LLMs can handle tasks ranging from general functions to domain-specific areas, such as radiology, without additional fine-tuning. Importantly, LLMs are evolving rapidly, addressing challenges such as hallucination, bias in training data, high training costs, performance drift, and privacy concerns, while also incorporating multimodal inputs. Small, on-premise, open-source LLMs have attracted growing interest because they make it possible to fine-tune on medical domain knowledge, mitigate efficiency and privacy issues, and manage performance drift effectively and simultaneously. This review provides radiologists with conceptual knowledge, actionable guidance, and an overview of the current technological landscape and future directions of LLMs.