

Trust, trustworthiness and AI governance.

Authors

Lahusen Christian, Maggetti Martino, Slavkovik Marija

Affiliations

Department of Social Sciences, Universität Siegen, 57068, Siegen, Germany.

Université de Lausanne, Institute of Political Studies, CH-1015, Lausanne, Switzerland.

Publication

Sci Rep. 2024 Sep 5;14(1):20752. doi: 10.1038/s41598-024-71761-0.

DOI: 10.1038/s41598-024-71761-0
PMID: 39237635
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11377768/
Abstract

An emerging issue in AI alignment is the use of artificial intelligence (AI) by public authorities, and specifically the integration of algorithmic decision-making (ADM) into core state functions. In this context, the alignment of AI with the values related to the notions of trust and trustworthiness constitutes a particularly sensitive problem from a theoretical, empirical, and normative perspective. In this paper, we offer an interdisciplinary overview of the scholarship on trust in sociology, political science, and computer science anchored in artificial intelligence. On this basis, we argue that only a coherent and comprehensive interdisciplinary approach making sense of the different properties attributed to trust and trustworthiness can convey a proper understanding of complex watchful trust dynamics in a socio-technical context. Ensuring the trustworthiness of AI-Governance ultimately requires an understanding of how to combine trust-related values while addressing machines, humans and institutions at the same time. We offer a road-map of the steps that could be taken to address the challenges identified.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d838/11377768/14a4ddacf8bc/41598_2024_71761_Fig1_HTML.jpg

Similar articles

1
Trust, trustworthiness and AI governance.
Sci Rep. 2024 Sep 5;14(1):20752. doi: 10.1038/s41598-024-71761-0.
2
How the EU AI Act Seeks to Establish an Epistemic Environment of Trust.
Asian Bioeth Rev. 2024 Jun 24;16(3):345-372. doi: 10.1007/s41649-024-00304-6. eCollection 2024 Jul.
3
Intentional machines: A defence of trust in medical artificial intelligence.
Bioethics. 2022 Feb;36(2):154-161. doi: 10.1111/bioe.12891. Epub 2021 Jun 18.
4
Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences.
Risk Anal. 2024 Jun;44(6):1498-1513. doi: 10.1111/risa.14245. Epub 2023 Nov 8.
5
Artificial intelligence and clinical decision support: clinicians' perspectives on trust, trustworthiness, and liability.
Med Law Rev. 2023 Nov 27;31(4):501-520. doi: 10.1093/medlaw/fwad013.
6
Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care.
BMC Med Ethics. 2023 Jun 20;24(1):42. doi: 10.1186/s12910-023-00917-w.
7
Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk.
Regul Gov. 2024 Jan;18(1):3-32. doi: 10.1111/rego.12512. Epub 2023 Feb 6.
8
[How trustworthy is artificial intelligence?: A model for the conflict between objectivity and subjectivity].
Inn Med (Heidelb). 2023 Nov;64(11):1051-1057. doi: 10.1007/s00108-023-01602-1. Epub 2023 Sep 22.
9
Public governance of medical artificial intelligence research in the UK: an integrated multi-scale model.
Res Involv Engagem. 2022 May 21;8(1):21. doi: 10.1186/s40900-022-00357-7.
10
Who is responsible? US Public perceptions of AI governance through the lenses of trust and ethics.
Public Underst Sci. 2024 Jul;33(5):654-672. doi: 10.1177/09636625231224592. Epub 2024 Feb 7.

Cited by

1
Regulating genome language models: navigating policy challenges at the intersection of AI and genetics.
Hum Genet. 2025 Sep 16. doi: 10.1007/s00439-025-02768-4.

References

1
Trust and distrust in interorganisational relations-Scale development.
PLoS One. 2022 Dec 16;17(12):e0279231. doi: 10.1371/journal.pone.0279231. eCollection 2022.
2
From fair predictions to just decisions? Conceptualizing algorithmic fairness and distributive justice in the context of data-driven decision-making.
Front Sociol. 2022 Oct 10;7:883999. doi: 10.3389/fsoc.2022.883999. eCollection 2022.
3
Accountable Artificial Intelligence: Holding Algorithms to Account.
Public Adm Rev. 2021 Sep-Oct;81(5):825-836. doi: 10.1111/puar.13293. Epub 2020 Nov 11.
4
IEEE P7001: A Proposed Standard on Transparency.
Front Robot AI. 2021 Jul 26;8:665729. doi: 10.3389/frobt.2021.665729. eCollection 2021.
5
Trust in Artificial Intelligence: Meta-Analytic Findings.
Hum Factors. 2023 Mar;65(2):337-359. doi: 10.1177/00187208211013988. Epub 2021 May 28.