Meyer Jesse G, Urbanowicz Ryan J, Martin Patrick C N, O'Connor Karen, Li Ruowang, Peng Pei-Chen, Bright Tiffani J, Tatonetti Nicholas, Won Kyoung Jae, Gonzalez-Hernandez Graciela, Moore Jason H
Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, California, USA.
Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA.
BioData Min. 2023 Jul 13;16(1):20. doi: 10.1186/s13040-023-00339-9.
The introduction, in late 2022, of large language models (LLMs) that allow iterative "chat" is a paradigm shift, enabling the generation of text often indistinguishable from that written by humans. LLM-based chatbots have immense potential to improve the efficiency of academic work, but the ethical implications of their fair use and inherent bias must be considered. In this editorial, we discuss this technology from the academic's perspective, focusing on its limitations and utility for academic writing, education, and programming. We end with our stance on using LLMs and chatbots in academia, which is summarized as follows: (1) we must find ways to use them effectively, (2) their use does not constitute plagiarism (although they may produce plagiarized text), (3) we must quantify their bias, (4) users must be cautious of their poor accuracy, and (5) the future is bright for their application to research and as an academic tool.