

The cyclical ethical effects of using artificial intelligence in education.

Author information

Dieterle Edward, Dede Chris, Walker Michael

Affiliations

Educational Testing Service, Washington, DC USA.

Harvard University, Cambridge, MA USA.

Publication information

AI Soc. 2022 Sep 27:1-11. doi: 10.1007/s00146-022-01497-w.

Abstract

Our synthetic review of the relevant and related literatures on the ethics and effects of using AI in education reveals five qualitatively distinct and interrelated divides associated with access, representation, algorithms, interpretations, and citizenship. We open our analysis by probing the ethical effects of using AI tools and techniques to model and inform instructional decisions and predict learning outcomes, and how teams of humans can plan for and mitigate bias when doing so. We then analyze the upstream divides that feed into and fuel the algorithmic divide, first investigating the access divide (who does and does not have access to the hardware, software, and connectivity necessary to engage with AI-enhanced digital learning tools and platforms) and then the representation divide (the factors making data either representative of the total population or over-representative of a subpopulation's preferences, thereby preventing objectivity and biasing understandings and outcomes). After that, we analyze the divides downstream of the algorithmic divide: the interpretation divide (how learners, educators, and others understand the outputs of algorithms and use them to make decisions) and the citizenship divide (how the other divides accumulate to impact interpretations of data by learners, educators, and others, in turn influencing behaviors and, over time, skills, culture, economic, health, and civic outcomes). At present, lacking ongoing reflection and action by learners, educators, educational leaders, designers, scholars, and policymakers, the five divides collectively create a vicious cycle and perpetuate structural biases in teaching and learning. However, increasing human responsibility and control over these divides can create a virtuous cycle that improves diversity, equity, and inclusion in education. We conclude the article by looking forward and discussing ways to increase educational opportunity and effectiveness for all by mitigating bias through a cycle of progressive improvement.

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/623c/9513289/6166178d26ce/146_2022_1497_Fig1_HTML.jpg
