
Explainable artificial intelligence (XAI) for interpreting the contributing factors feed into the wildfire susceptibility prediction model.

Affiliations

Fenner School of Environment & Society, College of Science, The Australian National University, Canberra, ACT, Australia; Centre for Advanced Modelling and Geospatial Information Systems, School of Civil and Environmental Engineering, Faculty of Engineering and IT, University of Technology Sydney, Ultimo, NSW 2007, Australia.

Centre for Advanced Modelling and Geospatial Information Systems, School of Civil and Environmental Engineering, Faculty of Engineering and IT, University of Technology Sydney, Ultimo, NSW 2007, Australia; Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia.

Publication information

Sci Total Environ. 2023 Jun 25;879:163004. doi: 10.1016/j.scitotenv.2023.163004. Epub 2023 Mar 24.

Abstract

Wildfire is one of the worst environmental catastrophes endangering the Australian community. To lessen potential fire threats, it is helpful to recognize fire occurrence patterns and identify fire susceptibility in wildfire-prone regions. Machine learning (ML) algorithms are among the best-known methods for addressing non-linear problems such as wildfire hazards. Analyzing these multivariate environmental disasters has always been difficult because modeling can be influenced by several sources of uncertainty, including the quantity and quality of the training procedures and input variables. Moreover, although ML techniques show promise in this field, they are unstable for a number of reasons, including the use of irrelevant descriptor features when developing the models. Explainable AI (XAI) can help us gain insight into these constraints and, consequently, modify the modeling approach and training data as necessary. In this research, we describe how a Shapley additive explanations (SHAP) model can be used to interpret the results of a deep learning (DL) model developed for wildfire susceptibility prediction. Contributing factors such as topographical, land-cover/vegetation, and meteorological variables are fed into the model, and various SHAP plots are used to identify which parameters influence the prediction model, their relative importance, and the reasoning behind specific decisions. The findings drawn from the SHAP plots show that factors such as humidity, wind speed, rainfall, elevation, slope, and the normalized difference moisture index (NDMI) contribute significantly to the proposed model's output for wildfire susceptibility mapping. We infer that developing an explainable model would aid in understanding the model's decisions when mapping wildfire susceptibility, in pinpointing the highest-contributing factors in the prediction model, and consequently in managing fire hazards effectively.
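The abstract names the technique (SHAP applied to a DL classifier) but, being an abstract, gives no implementation details. For readers unfamiliar with SHAP: it explains each prediction additively, f(x) = φ₀ + Σᵢ φᵢ(x), where φ₀ is a base (expected) value and each φᵢ is the Shapley contribution of one input factor. The sketch below is a minimal, hypothetical illustration of such a workflow in Python, not the paper's implementation: the synthetic data, the toy network, and the choice of shap.KernelExplainer are all assumptions; only the feature names come from the factors listed in the abstract.

```python
# Minimal, hypothetical sketch of a SHAP workflow for wildfire-susceptibility
# prediction. The data, network, and explainer choice are illustrative
# assumptions, not the paper's actual setup.
import numpy as np
import shap
from tensorflow import keras

# Contributing factors named in the abstract.
FEATURES = ["humidity", "wind_speed", "rainfall", "elevation", "slope", "ndmi"]

# Placeholder data: each row is a location, the label is fire (1) / no fire (0).
rng = np.random.default_rng(0)
X_train = rng.random((500, len(FEATURES)))
y_train = rng.integers(0, 2, 500)

# A small fully connected network standing in for the paper's DL model.
model = keras.Sequential([
    keras.layers.Input(shape=(len(FEATURES),)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=5, verbose=0)

# Model-agnostic KernelExplainer: attributes each prediction to the input
# factors as Shapley values, relative to a background sample.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(
    lambda x: model.predict(x, verbose=0).ravel(), background
)
shap_values = explainer.shap_values(X_train[:50])

# Global view: the summary (beeswarm) plot ranks factors by mean |SHAP|
# and shows the direction of each factor's effect on predicted susceptibility.
shap.summary_plot(shap_values, X_train[:50], feature_names=FEATURES)
```

In a real application, each row would be a grid cell whose values are sampled from topographic, land-cover/vegetation, and meteorological rasters, and the summary plot would yield the kind of global importance ranking the abstract reports for humidity, wind speed, rainfall, elevation, slope, and NDMI.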

