Hofstra University, Hempstead, NY, United States.
J Med Internet Res. 2023 Nov 28;25:e47551. doi: 10.2196/47551.
Artificial intelligence (AI) chatbots such as ChatGPT and Google Bard are computer programs that use AI and natural language processing to understand user questions and generate natural, fluid, dialogue-like responses. ChatGPT, an AI chatbot created by OpenAI, has rapidly become one of the most widely used tools on the internet. AI chatbots have the potential to improve patient care and public health. However, they are trained on massive amounts of data, which may include sensitive patient information and business data. The growing use of chatbots introduces data security concerns that must be addressed yet remain understudied. This paper aims to identify the most important security problems of AI chatbots and to propose guidelines for protecting sensitive health information. It explores the impact of using ChatGPT in health care, identifies the principal security risks of ChatGPT, and suggests key considerations for mitigating those risks. It concludes by discussing the policy implications of using AI chatbots in health care.