Digital Health Cooperative Research Centre, Melbourne, Australia.
EBioMedicine. 2023 Apr;90:104512. doi: 10.1016/j.ebiom.2023.104512. Epub 2023 Mar 15.
Large Language Models (LLMs) are a key component of generative artificial intelligence (AI) applications for creating new content, including text, imagery, audio, code, and videos, in response to textual instructions. Without human oversight, guidance, and responsible design and operation, such generative AI applications will remain a party trick with substantial potential for creating and spreading misinformation or harmful and inaccurate content at unprecedented scale. However, if positioned and developed responsibly as companions to humans, augmenting but not replacing their role in decision making, knowledge retrieval, and other cognitive processes, they could evolve into highly efficient, trustworthy, assistive tools for information management. This perspective describes how such tools could transform data management workflows in healthcare and medicine, explains how the underlying technology works, provides an assessment of risks and limitations, and proposes an ethical, technical, and cultural framework for responsible design, development, and deployment. It seeks to incentivise users, developers, providers, and regulators of generative AI that utilises LLMs to collectively prepare for the transformational role this technology could play in evidence-based sectors.