Savoldi Beatrice, Bastings Jasmijn, Bentivogli Luisa, Vanmassenhove Eva
Fondazione Bruno Kessler, Trento, Italy.
Google DeepMind, Amsterdam, the Netherlands.
Patterns (N Y). 2025 May 2;6(6):101257. doi: 10.1016/j.patter.2025.101257. eCollection 2025 Jun 13.
Gender bias in machine translation (MT) has been studied for over a decade, a period marked by societal, linguistic, and technological shifts. With the field's early optimism about a quick solution in mind, we review over 100 studies on the topic and uncover a more complex reality: one that resists a simple technical fix. While we identify key trends and advancements, persistent gaps remain, and we argue that there is no simple technical solution to bias. Building on insights from our review, we examine the growing prominence of large language models and discuss the challenges and opportunities they present in the context of gender bias and translation. By doing so, we hope to inspire future work in the field to break with past limitations: less focused on a technical fix; more user-centric, multilingual, and multiculturally diverse; more personalized; and better grounded in real-world needs.