Bozeman Joe F, Hollauer Catharina, Ramshankar Arjun Thangaraj, Nakkasunchi Shalini, Jambeck Jenna, Hicks Andrea, Bilec Melissa, McCauley Darren, Heidrich Oliver
Civil & Environmental Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA.
Public Policy, Georgia Institute of Technology, Atlanta, Georgia, USA.
J Ind Ecol. 2024 Dec;28(6):1362-1376. doi: 10.1111/jiec.13509. Epub 2024 Jun 18.
Recent calls have been made for equity tools and frameworks to be integrated throughout the research and design life cycle, from conception to implementation, with an emphasis on reducing inequity in artificial intelligence (AI) and machine learning (ML) applications. Simply stating that equity should be integrated throughout, however, leaves much to be desired as industrial ecology (IE) researchers, practitioners, and decision-makers attempt to employ equitable practices. In this forum piece, we use a critical review approach, drawing on the food system, to explain how socioecological inequities emerge across the life cycle stages of ML applications. We exemplify the use of a comprehensive questionnaire to delineate unfair ML bias across three categories: data bias, algorithmic bias, and selection and deployment bias. We then provide consolidated guidance and tailored strategies to help address unfair AI/ML bias and inequity in IE applications. Specifically, the guidance and tools help to address sensitivity, reliability, and uncertainty challenges. We also discuss how bias and inequity in AI/ML affect IE research and design domains beyond the food system, such as living labs and circularity. We conclude with future directions IE should take to address unfair bias and inequity in AI/ML. Last, we call for systemic equity to be embedded throughout IE applications to fundamentally understand domain-specific socioecological inequities, identify potential unfairness in ML, and select mitigation strategies in a manner that translates across different research domains.