Neetu Soni, Manish Ora, Amit Agarwal, Tianbao Yang, Girish Bathla
Department of Radiology, Mayo Clinic, 4500 San Pablo Road, Jacksonville, FL 32224, USA (N.S., A.A.); Department of Nuclear Medicine, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India (M.O.); Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA (T.Y.); Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55902, USA (G.B.).
AJNR Am J Neuroradiol. 2024 Nov 21. doi: 10.3174/ajnr.A8589.
In recent years, generative artificial intelligence (AI), particularly large language models (LLMs) and their multimodal counterparts, multimodal large language models (MM-LLMs), including vision language models (VLMs), have generated considerable interest in the global AI discourse. LLMs are pre-trained language models (such as ChatGPT, Med-PaLM, and LLaMA): neural network architectures trained on extensive text data that excel at language comprehension and generation. MM-LLMs, a subset of foundation models, are trained on multimodal datasets that pair text with another modality, such as images, to learn universal representations more akin to human cognition. This versatility enables them to excel at tasks such as conversational agents, translation, and creative writing, while facilitating knowledge sharing through transfer learning, federated learning, and synthetic data creation. Several of these models have potentially appealing applications in the medical domain, including, but not limited to, enhancing patient care by processing patient data, summarizing reports and relevant literature, providing diagnostic, treatment, and follow-up recommendations, and supporting ancillary tasks such as coding and billing. As radiologists enter this promising but uncharted territory, it is imperative that they become familiar with the basic terminology and processes of LLMs. Herein, we present an overview of LLMs and their potential applications and challenges in the imaging domain.

ABBREVIATIONS: AI: artificial intelligence; BERT: Bidirectional Encoder Representations from Transformers; CLIP: Contrastive Language-Image Pretraining; FM: foundation model; GPT: Generative Pre-trained Transformer; LLM: large language model; MM-LLM: multimodal large language model; NLP: natural language processing; VLM: vision language model.
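To make the text-image alignment described above concrete, the following is a minimal sketch (added here for illustration, not taken from the article) of zero-shot image-text matching with CLIP, one of the models named in the abbreviation list, using the publicly documented Hugging Face transformers API. The checkpoint name, sample image URL, and candidate prompts are illustrative assumptions.

```python
# Minimal sketch of the contrastive text-image alignment behind VLMs such as
# CLIP, via the publicly documented Hugging Face `transformers` API.
# Checkpoint, image URL, and prompts below are illustrative assumptions.
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any RGB image works; here a public sample image is fetched over HTTP.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Candidate text prompts; CLIP scores each against the image.
texts = ["a photo of a cat", "a photo of a dog", "a chest radiograph"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
# logits_per_image holds image-text similarity scores; a softmax over the
# prompts yields a zero-shot "classification" of the image.
probs = outputs.logits_per_image.softmax(dim=1)
for text, p in zip(texts, probs[0].tolist()):
    print(f"{p:.3f}  {text}")
```

In broad terms, the same contrastive pretraining objective underlies many radiology-oriented VLMs, which substitute domain-specific image-report pairs for general web data.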