Gevaert Caroline M, Carman Mary, Rosman Benjamin, Georgiadou Yola, Soden Robert
Department of Earth Observation Science, Faculty ITC, University of Twente, Enschede, Overijssel 7514AE, the Netherlands.
Department of Philosophy, Faculty of Humanities, University of the Witwatersrand, Johannesburg, Gauteng 2000, South Africa.
Patterns (N Y). 2021 Nov 12;2(11):100363. doi: 10.1016/j.patter.2021.100363.
Disaster risk management (DRM) seeks to help societies prepare for, mitigate, or recover from the adverse impacts of disasters and climate change. Core to DRM are disaster risk models, which rely heavily on geospatial data about the natural and built environments. Developers are increasingly turning to artificial intelligence (AI) to improve the quality of these models. Yet there is still little understanding of the extent to which hidden geospatial biases affect disaster risk models, or of how accountability relationships are reshaped by these emerging actors and methods. In many cases, there is also a disconnect between the algorithm designers and the communities where the research is conducted or the algorithms are implemented. This perspective highlights emerging concerns about the use of AI in DRM. We discuss potential concerns and illustrate what must be considered from data science, ethical, and social perspectives to ensure the responsible use of AI in this field.