Yogarajan Vithya, Dobbie Gillian, Keegan Te Taka
School of Computer Science, University of Auckland, Auckland, New Zealand.
School of Computing and Mathematical Sciences, University of Waikato, Hamilton, New Zealand.
J R Soc N Z. 2024 Sep 16;55(2):372-395. doi: 10.1080/03036758.2024.2398567. eCollection 2025.
Large language models (LLMs) are powerful decision-making tools widely adopted in healthcare, finance, and transportation. Embracing the opportunities and innovations of LLMs is inevitable. However, LLMs inherit stereotypes, misrepresentations, discrimination, and societal biases from various sources, including training data, algorithm design, and user interactions, resulting in concerns about equality, diversity, and fairness. The bias problem has triggered increased research towards defining, detecting, and quantifying bias and developing debiasing techniques. The predominant focus in tackling the bias problem is skewed towards resource-rich regions such as the US and Europe, resulting in a scarcity of research in other societies. As a small country with a unique history, culture, and social composition, Aotearoa New Zealand (NZ) presents an opportunity for its research community to address this inadequacy. This paper presents an experimental evaluation of existing bias metrics and debiasing techniques in the NZ context. Research gaps derived from the study and a literature review are outlined, current and ongoing research in this space is discussed, and the suggested scope of research opportunities for NZ is presented.