Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy.
Training Office, National Institute of Health, Rome, Italy.
Front Public Health. 2023 Apr 25;11:1166120. doi: 10.3389/fpubh.2023.1166120. eCollection 2023.
Large Language Models (LLMs) have recently garnered attention with the release of ChatGPT, a user-centered chatbot released by OpenAI. In this perspective article, we retrace the evolution of LLMs to understand the revolution brought by ChatGPT in the artificial intelligence (AI) field. The opportunities offered by LLMs in supporting scientific research are multiple, and various models have already been tested on Natural Language Processing (NLP) tasks in this domain. The impact of ChatGPT has been huge for the general public and the research community, with many authors using the chatbot to write parts of their articles and some papers even listing ChatGPT as an author. Alarming ethical and practical challenges emerge from the use of LLMs, particularly in the medical field, given the potential impact on public health. Infodemic is a trending topic in public health, and the ability of LLMs to rapidly produce vast amounts of text could amplify the spread of misinformation at an unprecedented scale, creating an "AI-driven infodemic," a novel public health threat. Policies to counter this phenomenon need to be developed rapidly; the inability to accurately detect artificial-intelligence-produced text remains an unresolved issue.