Katirai Amelia
Osaka University, Japan.
Autism. 2025 Mar;29(3):554-565. doi: 10.1177/13623613241279704. Epub 2024 Sep 16.
The use of emotion recognition technologies in the workplace is expanding. These technologies claim to provide insights into internal emotional states based on external cues like facial expressions. Despite interconnections between autism and the development of emotion recognition technologies reported in prior research, little attention has been paid to the particular issues that arise for autistic individuals when emotion recognition technologies are implemented in consequential settings like the workplace. This article examines recent literature on autism and on emotion recognition technologies to argue that the risks of using emotion recognition technologies in the workplace are heightened for autistic people. Following a brief overview of emotion recognition technologies, this argument is made by focusing on the issues that arise through the development and deployment of the technologies. Issues related to development include fundamental problems with the science behind the technologies, the underrepresentation of autistic individuals in data sets and the problems with increasing this representation, and the annotation of the technologies' training data. Issues related to implementation include the invasive nature of emotion recognition technologies, the sensitivity of the data used, and the imposition of neurotypical norms on autistic workers through their use. The article closes with a call for future research on the implications of these emergent technologies for autistic individuals.

Lay abstract
Technologies using artificial intelligence to recognize people's emotional states are increasingly being developed under the name of emotion recognition technologies. Emotion recognition technologies claim to identify people's emotional states based on data such as facial expressions. This is despite research providing counterevidence that emotion recognition technologies are founded on bad science and that it is not possible to correctly identify people's emotions in this way. The use of emotion recognition technologies is widespread, and they can be harmful when they are used in the workplace, especially for autistic workers. Although previous research has shown that the origins of emotion recognition technologies relied on autistic people, there has been little research on the impact of emotion recognition technologies on autistic people when they are used in the workplace. Through a review of recent academic studies, this article looks at the development and implementation processes of emotion recognition technologies to show how autistic people in particular may be disadvantaged or harmed by the development and use of the technologies. The article closes with a call for more research on autistic people's perceptions of the technologies and their impact, with involvement from diverse participants.