Brian R. Jackson, Hooman H. Rashidi, Jochen K. Lennerz, M.E. de Baca
From the Department of Pathology, University of Utah, Salt Lake City (Jackson); and the Computational Pathology & AI Center of Excellence, University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (Rashidi).
Arch Pathol Lab Med. 2025 Feb 1;149(2):123-129. doi: 10.5858/arpa.2024-0205-RA.
CONTEXT.—: Technology companies and research groups are increasingly exploring applications of generative artificial intelligence (GenAI) in pathology and laboratory medicine. Although GenAI holds considerable promise, it also introduces novel risks for patients, communities, professionals, and the scientific process.
OBJECTIVE.—: To summarize the current frameworks for the ethical development and management of GenAI within health care settings.
DATA SOURCES.—: The analysis draws from scientific journals, organizational websites, and recent guidelines on artificial intelligence ethics and regulation.
CONCLUSIONS.—: The literature on the ethical management of artificial intelligence in medicine is extensive but is still in its nascent stages because of the evolving nature of the technology. Effective and ethical integration of GenAI requires robust processes and shared accountability among technology vendors, health care organizations, regulatory bodies, medical professionals, and professional societies. As the technology continues to develop, a multifaceted ecosystem of safety mechanisms and ethical oversight is crucial to maximize benefits and mitigate risks.