Institute of Literature in Chinese Medicine, Nanjing University of Chinese Medicine, Nanjing, 210023, China.
Nantong University Xinglin College, Nantong, 226236, China.
BMC Med Inform Decis Mak. 2023 Jan 13;23(1):7. doi: 10.1186/s12911-023-02103-9.
The growing application of artificial intelligence (AI) in healthcare has brought technological breakthroughs to traditional diagnosis and treatment, but it is accompanied by many risks and challenges. These adverse effects are also seen as ethical issues that affect the trustworthiness of medical AI and need to be managed through identification, prediction and monitoring.
We adopted a multidisciplinary approach and identified five factors that influence the trustworthiness of medical AI: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution. We discussed these factors from the perspectives of technology, law, and healthcare stakeholders and institutions. Using an ethical framework of values, principles, and norms, we propose corresponding ethical governance countermeasures for trustworthy medical AI at the ethical, legal, and regulatory levels.
Medical data are primarily unstructured and lack uniform, standardized annotation, and data quality directly affects the quality of medical AI algorithm models. Algorithmic bias can distort AI clinical predictions and exacerbate health disparities. The opacity of algorithms undermines patients' and doctors' trust in medical AI, and algorithmic errors or security vulnerabilities can pose significant risks and harm to patients. The involvement of medical AI in clinical practice may threaten doctors' and patients' autonomy and dignity. When accidents occur with medical AI, the attribution of responsibility is unclear. All of these factors affect people's trust in medical AI.
To make medical AI trustworthy, at the ethical level, the ethical value orientation of promoting human health should be treated as the foremost top-level design consideration. At the legal level, current medical AI does not have moral status, and humans remain the duty bearers. At the regulatory level, we propose strengthening data quality management, improving algorithm transparency and traceability to reduce algorithmic bias, and regulating and reviewing the whole process of the AI industry to control risks. It is also necessary to encourage multiple stakeholders to discuss and assess AI risks and social impacts, and to strengthen international cooperation and communication.