Chu Bianca, Modi Natansh D, Menz Bradley D, Bacchi Stephen, Kichenadasse Ganessan, Paterson Catherine, Kovoor Joshua G, Ramsey Imogen, Logan Jessica M, Wiese Michael D, McKinnon Ross A, Rowland Andrew, Sorich Michael J, Hopkins Ashley M
Flinders Health and Medical Research Institute, College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia.
Department of Neurology and the Center for Genomic Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States.
Front Public Health. 2025 May 9;13:1584348. doi: 10.3389/fpubh.2025.1584348. eCollection 2025.
Generative artificial intelligence (AI) is advancing rapidly; an important consideration is the public's increasing ability to customise foundational AI models to create publicly accessible applications tailored for specific tasks. This study aims to evaluate the accessibility and functionality descriptions of customised GPTs on the OpenAI GPT store that provide health-related information or assistance to patients and healthcare professionals.
We conducted a cross-sectional observational study of the OpenAI GPT store from September 2 to 6, 2024, to identify publicly accessible customised GPTs with health-related functions. We searched across general medicine, psychology, oncology, cardiology, and immunology applications. Identified GPTs were assessed for their name, description, intended audience, and usage. Regulatory status was checked across the U.S. Food and Drug Administration (FDA), European Union Medical Device Regulation (EU MDR), and Australian Therapeutic Goods Administration (TGA) databases.
A total of 1,055 customised, health-related GPTs targeting patients and healthcare professionals were identified, which had collectively been used in over 360,000 conversations. Of these, 587 were psychology-related, 247 were in general medicine, 105 in oncology, 52 in cardiology, 30 in immunology, and 34 in other health specialties. Notably, 624 of the identified GPTs included healthcare professional titles (e.g., doctor, nurse, psychiatrist, oncologist) in their names and/or descriptions, suggesting they were taking on such roles. None of the customised GPTs identified were FDA, EU MDR, or TGA-approved.
This study highlights the rapid emergence of publicly accessible, customised, health-related GPTs. The findings raise important questions about whether current AI medical device regulations are keeping pace with rapid technological advancements. The results also highlight the potential "role creep" in AI chatbots, where publicly accessible applications begin to perform, or claim to perform, functions traditionally reserved for licensed professionals, underscoring potential safety concerns.