Machine learning for environmental justice: Dissecting an algorithmic approach to predict drinking water quality in California.

Affiliations

University of California Berkeley, Energy and Resources Group, Berkeley, California, United States.

University of California Berkeley, Environmental Science, Policy, and Management, Berkeley, California, United States; University of California Berkeley, School of Public Health, Berkeley, California, United States.

Publication Information

Sci Total Environ. 2024 Nov 15;951:175730. doi: 10.1016/j.scitotenv.2024.175730. Epub 2024 Aug 24.

Abstract

The potential for machine learning to answer questions of environmental science, monitoring, and regulatory enforcement is evident, but there is cause for concern regarding potential embedded bias: algorithms can codify discrimination and exacerbate systematic gaps. This paper, organized into two halves, underscores the importance of vetting algorithms for bias when used for questions of environmental science and justice. In the first half, we present a case study of using machine learning for environmental justice-motivated research: prediction of drinking water quality. While performance varied across models and contaminants, some performed well. Multiple models had overall accuracy rates at or above 90% and F2 scores above 0.60 on their respective test sets. In the second half, we dissect this algorithmic approach to examine how modeling decisions affect modeling outcomes - and not only how these decisions change whether the model is correct or incorrect, but for whom. We find that multiple decision points in the modeling process can lead to different predictive outcomes. More importantly, we find that these choices can result in significant differences in demographic characteristics of false negatives. We conclude by proposing a set of practices for researchers and policy makers to follow (and improve upon) when applying machine learning to questions of environmental science, management, and justice.
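The abstract reports overall accuracy and F2 as its headline performance metrics. As a minimal sketch of how those quantities are computed, the snippet below uses the standard F-beta definition with beta = 2, which weights recall twice as heavily as precision and therefore penalizes false negatives (missed contamination events) more than false positives. The labels are hypothetical placeholders for illustration only, not data or code from the study.

```python
# Illustrative calculation of the metrics named in the abstract.
# The label arrays are hypothetical placeholders, not data from the study.
y_true = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]  # 1 = contaminant exceeds the regulatory limit
y_pred = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0]  # model predictions

# Confusion-matrix counts for the positive (contaminated) class.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

# Overall accuracy: fraction of predictions that match the true label.
accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

precision = tp / (tp + fp)
recall = tp / (tp + fn)

# F-beta with beta = 2 weights recall twice as heavily as precision, so a
# missed contamination event (false negative) hurts the score more than a false alarm.
beta = 2
f2 = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(f"accuracy = {accuracy:.2f}, F2 = {f2:.2f}")
```

With these placeholder labels the script prints accuracy = 0.90 and F2 ≈ 0.79, which simply illustrates the scale of the thresholds quoted in the abstract (accuracy at or above 90%, F2 above 0.60); the paper's own models and data are not reproduced here.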

