Balakrishnan Suryanarayanan, Thongprayoon Charat, Wathanavasin Wannasit, Miao Jing, Mao Michael A, Craici Iasmina M, Cheungpasitporn Wisit
Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States.
Nephrology Unit, Department of Medicine, Charoenkrung Pracharak Hospital, Bangkok, Thailand.
Front Artif Intell. 2025 May 27;8:1525937. doi: 10.3389/frai.2025.1525937. eCollection 2025.
The integration of Artificial Intelligence (AI) in nephrology has raised concerns regarding bias, fairness, and ethical decision-making, particularly in the context of Diversity, Equity, and Inclusion (DEI). AI-driven models, including Large Language Models (LLMs) like ChatGPT, may unintentionally reinforce existing disparities in patient care and workforce recruitment. This study investigates how AI models (ChatGPT 3.5 and 4.0) handle DEI-related ethical considerations in nephrology, highlighting the need for improved regulatory oversight to ensure equitable AI deployment.
The study was conducted in March 2024 using ChatGPT 3.5 and 4.0. Eighty simulated cases were developed to assess ChatGPT's decision-making across diverse nephrology topics. ChatGPT was instructed to respond to questions considering factors such as age, sex, gender identity, race, ethnicity, religion, cultural beliefs, socioeconomic status, education level, family structure, employment, insurance, geographic location, disability, mental health, language proficiency, and technology access.
ChatGPT 3.5 provided a response to all scenario questions and did not refuse to make decisions under any circumstances. This contradicts the essential DEI principle of avoiding decisions based on potentially discriminatory criteria. In contrast, ChatGPT 4.0 declined to make decisions based on potentially discriminatory criteria in 13 (16.3%) scenarios during the first round and in 5 (6.3%) during the second round.
While ChatGPT 4.0 shows improvement in ethical AI decision-making, its limited recognition of bias and DEI considerations underscores the need for robust AI regulatory frameworks in nephrology. AI governance must incorporate structured DEI guidelines, ongoing bias detection mechanisms, and ethical oversight to prevent AI-driven disparities in clinical practice and workforce recruitment. This study emphasizes the importance of transparency, fairness, and inclusivity in AI development, calling for collaborative efforts between AI developers, nephrologists, policymakers, and patient communities to ensure AI serves as an equitable tool in nephrology.
人工智能(AI)在肾脏病学中的应用引发了人们对偏见、公平性和伦理决策的担忧,尤其是在多元化、公平和包容(DEI)的背景下。包括ChatGPT等大型语言模型(LLM)在内的人工智能驱动模型可能会在无意中加剧患者护理和劳动力招聘方面现有的差异。本研究调查了人工智能模型(ChatGPT 3.5和4.0)如何处理肾脏病学中与DEI相关的伦理考量,强调了加强监管监督以确保人工智能公平部署的必要性。
该研究于2024年3月使用ChatGPT 3.5和4.0进行。开发了80个模拟案例,以评估ChatGPT在各种肾脏病学主题上的决策。要求ChatGPT在回答问题时考虑年龄、性别、性别认同、种族、民族、宗教、文化信仰、社会经济地位、教育水平、家庭结构、就业、保险、地理位置、残疾、心理健康、语言能力和技术获取等因素。
ChatGPT 3.5对所有情景问题都给出了回应,并且在任何情况下都没有拒绝做出决策。这与避免基于潜在歧视性标准做出决策的基本DEI原则相矛盾。相比之下,ChatGPT 4.0在第一轮中有13个(16.3%)情景、在第二轮中有5个(6.3%)情景拒绝基于潜在歧视性标准做出决策。
虽然ChatGPT 4.0在人工智能伦理决策方面有所改进,但其对偏见和DEI考量的认识有限,这凸显了肾脏病学中强大的人工智能监管框架的必要性。人工智能治理必须纳入结构化的DEI指南、持续的偏见检测机制和伦理监督,以防止人工智能在临床实践和劳动力招聘中造成差异。本研究强调了人工智能开发中透明度、公平性和包容性的重要性,呼吁人工智能开发者、肾脏病学家、政策制定者和患者群体共同努力,以确保人工智能在肾脏病学中成为一个公平的工具。