Sharma Hunny, Ruikar Manisha
Department of Community and Family Medicine, All India Institute of Medical Sciences, Raipur, Chhattisgarh, India.
Perspect Clin Res. 2024 Jul-Sep;15(3):108-115. doi: 10.4103/picr.picr_196_23. Epub 2023 Dec 19.
Chat generative pretrained transformer (ChatGPT) is a conversational language model powered by artificial intelligence (AI). It is a sophisticated language model that employs deep learning methods to generate human-like text outputs in response to natural-language inputs. This narrative review aims to shed light on ethical concerns about using AI models like ChatGPT as writing assistance in the health care and medical domains. Currently, AI models such as ChatGPT are in their infancy; there is a risk of inaccuracy in the generated content, lack of contextual understanding, dynamic knowledge gaps, limited discernment, lack of responsibility and accountability, issues of privacy, data security, transparency, and bias, and a lack of nuance and originality. Other issues, such as authorship, unintentional plagiarism, falsified and fabricated content, and the threat of being red-flagged as AI-generated content, highlight the need for regulatory compliance, transparency, and disclosure. If these legitimate issues are proactively considered and addressed, the potential applications of AI models as writing assistance could be rewarding.