
Ethical implications related to processing of personal data and artificial intelligence in humanitarian crises: a scoping review.

Author information

Kreutzer Tino, Orbinski James, Appel Lora, An Aijun, Marston Jerome, Boone Ella, Vinck Patrick

Affiliations

Kobo, Cambridge, MA, 02139, USA.

The Montreal Children's Hospital, McGill University Health Centre, Montreal, QC, H4A 3J1, Canada.

Publication information

BMC Med Ethics. 2025 Apr 15;26(1):49. doi: 10.1186/s12910-025-01189-2.

Abstract

BACKGROUND

Humanitarian organizations are rapidly expanding their use of data in the pursuit of operational gains in effectiveness and efficiency. Ethical risks, particularly from artificial intelligence (AI) data processing, are increasingly recognized yet inadequately addressed by current humanitarian data protection guidelines. This study reports on a scoping review that maps the range of ethical issues that have been raised in the academic literature regarding data processing of people affected by humanitarian crises.

METHODS

We systematically searched databases to identify peer-reviewed studies published since 2010. Data and findings were standardized, grouping ethical issues into the value categories of autonomy, beneficence, non-maleficence, and justice. The study protocol followed Arksey and O'Malley's approach and PRISMA reporting guidelines.

RESULTS

We identified 16,200 unique records and retained 218 relevant studies. Nearly one in three (n = 66) discussed technologies related to AI. Seventeen studies included an author from a lower-middle income country while four included an author from a low-income country. We identified 22 ethical issues which were then grouped along the four ethical value categories of autonomy, beneficence, non-maleficence, and justice. Slightly over half of included studies (n = 113) identified ethical issues based on real-world examples. The most-cited ethical issue (n = 134) was a concern for privacy in cases where personal or sensitive data might be inadvertently shared with third parties. Aside from AI, the technologies most frequently discussed in these studies included social media, crowdsourcing, and mapping tools.

CONCLUSIONS

Studies highlight significant concerns that data processing in humanitarian contexts can cause additional harm, may not provide direct benefits, may limit affected populations' autonomy, and can lead to the unfair distribution of scarce resources. The increase in AI tool deployment for humanitarian assistance amplifies these concerns. Urgent development of specific, comprehensive guidelines, training, and auditing methods is required to address these ethical challenges. Moreover, empirical research from low- and middle-income countries, which are disproportionately affected by humanitarian crises, is vital to ensure inclusive and diverse perspectives. This research should focus on the ethical implications of both emerging AI systems and established humanitarian data management practices.

TRIAL REGISTRATION

Not applicable.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0847/11998222/4390916f137d/12910_2025_1189_Fig1_HTML.jpg
