Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop
Author information
Goisauf Melanie, Cano Abadía Mónica, Akyüz Kaya, Bobowicz Maciej, Buyx Alena, Colussi Ilaria, Fritzsche Marie-Christine, Lekadir Karim, Marttinen Pekka, Mayrhofer Michaela Th, Meszaros Janos
Affiliations
Department of ELSI Services and Research, Biobanking and Biomolecular Resources Research Infrastructure Consortium, Graz, Austria.
2nd Department of Radiology, Gdańsk Medical University, Gdansk, Poland.
Publication information
J Med Internet Res. 2025 Jun 2;27:e71236. doi: 10.2196/71236.
Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components to achieve trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to foster AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected. 
Trustworthiness was shown to be not a feature but rather a relational process, involving humans, their expertise, and the broader social or institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated based on reliability and credibility, yet trust fundamentally relies on human connections. The article underscores the need for AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI life cycle, from conception and technical development through clinical validation and real-world deployment.