
Engineering Bias in AI

Author Information

Cynthia Weber

Publication Information

IEEE Pulse. 2019 Jan-Feb;10(1):15-17. doi: 10.1109/MPULS.2018.2885857.

Abstract

After working at Apple designing circuits and signal processing algorithms for products including the first iPad, Timnit Gebru (Figure 1) received her Ph.D. from the Stanford Artificial Intelligence Laboratory in the area of computer vision. She recently completed a postdoc with Microsoft Research in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, was a cofounder of Black in AI, and is currently working as a research scientist on the Ethical AI team at Google. Her research in algorithmic bias and the ethical implications of data mining has appeared in multiple publications, including The New York Times and The Economist. IEEE Pulse recently spoke with Gebru about the role societal bias plays in engineering AI, the deficits and dangers in the field caused by limited diversity, and the challenges inherent in addressing these complex issues.

