Soumyadeep Bhaumik
Meta-research and Evidence Synthesis Unit, Health Systems Science, The George Institute for Global Health, Sydney, Australia.
Faculty of Medicine and Health, UNSW Sydney, Kensington, Australia.
PLOS Glob Public Health. 2025 Mar 19;5(3):e0004348. doi: 10.1371/journal.pgph.0004348. eCollection 2025.
Artificial intelligence (AI) is increasingly being used in medicine and healthcare. However, there are no articles specifically examining the ethical and moral dimensions of AI use for evidence synthesis. This article attempts to fill this gap. In doing so, I deploy in written form the Adda (আড্ডা) approach from Bengali philosophy and culture, a form of oral exchange involving deep but conversational discussion. Adda developed as a form of intellectual resistance against the cultural hegemony of British imperialism and entails asking provocative questions to encourage critical discourse. The raison d'être for using AI is that it would enhance efficiency in the conduct of evidence synthesis, thus leading to greater evidence uptake. I question whether assuming so without any empirical evidence is ethical. I then examine the challenges posed by AI's lack of moral agency; the risk of bias and discrimination being amplified through AI-driven evidence synthesis; the ethical and moral dimensions of epistemic (knowledge-related) uncertainty in AI; the impact on knowledge systems (the training of future scientists, and epistemic conformity); and the need to look at ethical and moral dimensions beyond technical evaluation of AI models. I then discuss the ethical and moral responsibilities of governments, multilateral organisations, research institutions, and funders in regulating, and providing oversight of, the development, validation, and conduct of AI-driven evidence synthesis. I argue that industry self-regulation for responsible use of AI is unlikely to address ethical and moral concerns, and that there is a need to develop legal frameworks and ethics codes, and to bring such work within the ambit of institutional ethics committees, so as to enable appreciation of the complexities around the use of AI for evidence synthesis, mitigate moral hazards, and ensure that evidence synthesis leads to improvement of the health of individuals, nations, and societies.