Riedemann Lars, Labonne Maxime, Gilbert Stephen
Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany.
Liquid AI, Inc., 314 Main St., Cambridge, MA, 02142, USA.
NPJ Digit Med. 2024 Nov 27;7(1):339. doi: 10.1038/s41746-024-01344-w.
Large language models (LLMs) are increasingly applied in medical documentation and have been proposed for clinical decision support. We argue that the future for LLMs in medicine must be based on transparent and controllable open-source models. Openness enables medical tool developers to control the safety and quality of underlying AI models, while also allowing healthcare professionals to hold these models accountable. For these reasons, the future is open.