Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system.

Author Information

Alexander Carrie S, Smith Aaron, Ivanek Renata

Affiliations

Socioeconomics and Ethics, Artificial Intelligence in the Food System (AIFS), University of California, Davis, Davis, CA, United States.

Agricultural and Resource Economics, University of California, Davis, Davis, CA, United States.

Publication Information

Front Artif Intell. 2023 Dec 8;6:1298604. doi: 10.3389/frai.2023.1298604. eCollection 2023.

Abstract

Governments, researchers, and developers emphasize creating "trustworthy AI," defined as AI that prevents bias, ensures data privacy, and generates reliable results that perform as expected. However, in some cases problems arise not when AI is untrustworthy, technologically, but when it is. This article focuses on such problems in the food system. AI technologies facilitate the generation of masses of data that may illuminate existing food-safety and employee-safety risks. These systems may collect incidental data that could be used to assess and manage risks, or may be designed specifically for that purpose. The predictions and knowledge generated by these data and technologies may increase company liability and expense, and so discourage adoption of these predictive technologies. Such problems may extend beyond the food system to other industries. Based on interviews and literature, this article discusses the resulting vulnerabilities to liability and obstacles to technology adoption, arguing that "trustworthy AI" cannot be achieved through technology alone but requires social, cultural, and political cooperation as well as technical cooperation. Implications for law and further research are also discussed.


Similar Articles

Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak. 2023 Jan 13;23(1):7. doi: 10.1186/s12911-023-02103-9.

How should we regulate artificial intelligence? Philos Trans A Math Phys Eng Sci. 2018 Sep 13;376(2128). doi: 10.1098/rsta.2017.0360.
