Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia.
Syst Rev. 2022 Jul 15;11(1):142. doi: 10.1186/s13643-022-02012-4.
In recent years, innovations in artificial intelligence (AI) have led to the development of new healthcare AI (HCAI) technologies. Whilst some of these technologies show promise for improving the patient experience, ethicists have warned that AI can introduce and exacerbate harms and wrongs in healthcare. It is important that HCAI reflects the values that matter to people. However, involving patients and publics in research about AI ethics remains challenging due to relatively limited awareness of HCAI technologies. This scoping review aims to map how the existing literature on publics' views on HCAI addresses key issues in AI ethics and governance.
We developed a search query to conduct a comprehensive search of PubMed, Scopus, Web of Science, CINAHL, and Academic Search Complete from January 2010 onwards. We will include primary research studies that document publics' or patients' views on machine learning HCAI technologies. A coding framework has been designed and will be used to capture qualitative and quantitative data from the articles. Two reviewers will code a proportion of the included articles, and any discrepancies will be discussed amongst the team, with changes made to the coding framework accordingly. Final results will be reported quantitatively and qualitatively, examining how each AI ethics issue has been addressed by the included studies.
Consulting publics and patients about the ethics of HCAI technologies and innovations can offer important insights to those seeking to implement HCAI ethically and legitimately. This review will explore how ethical issues are addressed in the literature examining publics' and patients' views on HCAI, with the aim of determining the extent to which existing research has captured publics' views on HCAI ethics. This has the potential to support the development of implementation processes and regulation for HCAI that incorporate publics' values and perspectives.