Ethical Implications of Artificial Intelligence in Gastroenterology: The Co-pilot or the Captain?

Affiliations

Department of Internal Medicine, William Beaumont University Hospital, Royal Oak, MI, USA.

Clinical & Translational Epidemiology Unit, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.

Publication information

Dig Dis Sci. 2024 Aug;69(8):2727-2733. doi: 10.1007/s10620-024-08557-9. Epub 2024 Jul 15.

Abstract

Though artificial intelligence (AI) is being widely implemented in gastroenterology (GI) and hepatology and has the potential to be paradigm shifting for clinical practice, its pitfalls must be considered along with its advantages. Currently, although the use of AI is limited in practice to supporting clinical judgment, medicine is rapidly heading toward a global environment where AI will be increasingly autonomous. Broader implementation of AI will require careful ethical considerations, specifically related to bias, privacy, and consent. Widespread use of AI raises concerns related to increasing rates of systematic errors, potentially due to bias introduced in training datasets. We propose that a central repository for the collection and analysis of training and validation datasets is essential to overcoming potential biases. Since AI does not have built-in concepts of bias and equality, humans involved in AI development and implementation must ensure its ethical use and development. Moreover, ethical concerns regarding data ownership and health information privacy are likely to emerge, obviating traditional methods of obtaining patient consent that cover all possible uses of patient data. The question of liability in case of adverse events related to use of AI in GI must be addressed among the physician, the healthcare institution, and the AI developer. Though the future of AI in GI is very promising, herein we review the ethical considerations in need of additional guidance informed by community experience and collective expertise.
