Cohen I Glenn
Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics, Harvard Law School.
Am J Bioeth. 2023 Oct;23(10):8-16. doi: 10.1080/15265161.2023.2233357. Epub 2023 Jul 13.
In the last several months, major disciplines - law, medicine, and business, among other professions - have begun their initial reckoning with what ChatGPT and other large language models (LLMs) mean for them. With a heavy dose of humility, given how fast the technology is moving and how uncertain its social implications are, this article offers some early, tentative thoughts on what ChatGPT might mean for bioethics. I first argue that many bioethics issues raised by ChatGPT are similar to those raised by current medical AI built into devices, decision-support tools, data analytics, and the like: issues of data ownership, consent for data use, data representativeness and bias, and privacy. I describe how these familiar issues appear somewhat differently in the ChatGPT context, but much of the existing bioethical thinking on them provides a strong starting point. There are, however, a few "new-ish" issues I highlight - by "new-ish" I mean issues that, while perhaps not truly new, seem much more important for ChatGPT than for other forms of medical AI. These include informed consent and the right to know we are dealing with an AI, the problem of medical deepfakes, the risk of oligopoly and inequitable access related to foundation models, environmental effects, and, on the positive side, opportunities for democratizing knowledge and empowering patients. I also discuss how races toward dominance (between large companies, and between the U.S. and geopolitical rivals such as China) risk sidelining ethics.