Ng Madelena Y, Helzer Jarrod, Pfeffer Michael A, Seto Tina, Hernandez-Boussard Tina
Department of Medicine, Stanford University, Stanford, CA 94305, United States.
Technology and Digital Solutions, Stanford Health Care, Stanford, CA 94305, United States.
J Am Med Inform Assoc. 2025 Mar 1;32(3):586-588. doi: 10.1093/jamia/ocaf005.
Generative AI, particularly large language models (LLMs), holds great potential for improving patient care and operational efficiency in healthcare. However, the use of LLMs is complicated by regulatory concerns around data security and patient privacy. This study aimed to develop and evaluate a secure infrastructure that allows researchers to safely leverage LLMs in healthcare while ensuring HIPAA compliance and promoting equitable AI.
We implemented a private Azure OpenAI Studio deployment with secure, API-enabled endpoints for researchers. Two use cases were explored: detecting falls from electronic health record (EHR) notes and evaluating bias in mental health prediction using fairness-aware prompts.
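The API access pattern described above might look like the following sketch. The endpoint URL, deployment name, and prompt wording are hypothetical, since the paper does not publish the actual Stanford configuration; the sketch only builds the chat-completion request payload for the fall-detection use case, without making a network call:

```python
import json

# Hypothetical values -- substitute your institution's private
# Azure OpenAI deployment; none of these are from the paper.
AZURE_ENDPOINT = "https://example-private.openai.azure.com"
DEPLOYMENT = "gpt-4-secure"
API_VERSION = "2024-02-01"

def build_fall_detection_request(ehr_note: str) -> dict:
    """Build an Azure OpenAI chat-completions payload asking whether
    an EHR note documents a patient fall (illustrative prompt only)."""
    url = (f"{AZURE_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = {
        "messages": [
            {"role": "system",
             "content": "You are a clinical NLP assistant. "
                        "Answer only 'yes' or 'no'."},
            {"role": "user",
             "content": "Does the following EHR note document a "
                        f"patient fall?\n\n{ehr_note}"},
        ],
        "temperature": 0,  # deterministic output for classification
    }
    return {"url": url, "body": body}

if __name__ == "__main__":
    req = build_fall_detection_request("Pt found on floor next to bed.")
    print(json.dumps(req["body"], indent=2))
```

In a real deployment, the payload would be sent with an institution-issued API key over the private endpoint, keeping protected health information inside the HIPAA-compliant boundary rather than routing it to a public API.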
The framework provided secure, HIPAA-compliant API access to LLMs, allowing researchers to handle sensitive data safely. Both use cases highlighted the secure infrastructure's capacity to protect sensitive patient data while supporting innovation.
This centralized platform presents a scalable, secure, and HIPAA-compliant solution for healthcare institutions aiming to integrate LLMs into clinical research.