Meskó Bertalan, Topol Eric J
The Medical Futurist Institute, Budapest, Hungary.
Department of Behavioural Sciences, Semmelweis University, Budapest, Hungary.
NPJ Digit Med. 2023 Jul 6;6(1):120. doi: 10.1038/s41746-023-00873-0.
The rapid advancement of artificial intelligence (AI) has led to the development of sophisticated large language models (LLMs) such as GPT-4 and Bard. The potential implementation of LLMs in healthcare settings has already garnered considerable attention because of their diverse applications, which include facilitating clinical documentation, obtaining insurance pre-authorization, summarizing research papers, and serving as chatbots that answer patients' questions about their specific data and concerns. While offering transformative potential, LLMs warrant a very cautious approach, since these models are trained differently from the AI-based medical technologies that are already regulated, especially within the critical context of caring for patients. The newest version, GPT-4, released in March 2023, raises both the potential of this technology to support multiple medical tasks and the risks of mishandling its results, which vary in reliability, to a new level. Beyond being an advanced LLM, it will be able to read text in images and analyze the context of those images. Regulating GPT-4 and generative AI in medicine and healthcare without damaging their exciting and transformative potential is a timely and critical challenge to ensure safety, maintain ethical standards, and protect patient privacy. We argue that regulatory oversight should assure that medical professionals and patients can use LLMs without causing harm or compromising their data or privacy. This paper summarizes our practical recommendations for what we can expect from regulators to bring this vision to reality.