Luz de Araujo Pedro Henrique, Roth Benjamin
Faculty of Computer Science, University of Vienna, Vienna, Austria.
Doctoral School Computer Science, Faculty of Computer Science, Vienna, Austria.
PLoS One. 2025 Jun 30;20(6):e0325664. doi: 10.1371/journal.pone.0325664. eCollection 2025.
One way to steer generations from large language models (LLMs) is to assign a persona: a role that describes how the user expects the LLM to behave (e.g., a helpful assistant, a teacher, a woman). This paper investigates how personas affect diverse aspects of model behavior. We assign 162 personas from 12 categories, spanning variables such as gender, sexual orientation, and occupation, to seven LLMs. We prompt the models to answer questions from five datasets covering objective tasks (e.g., questions about math and history) and subjective tasks (e.g., questions about beliefs and values). We also compare persona generations to two baseline settings: a control persona setting with 30 paraphrases of "a helpful assistant", to account for the models' prompt sensitivity, and an empty persona setting where no persona is assigned. We find that, for all models and datasets, personas show greater variability than the control setting, and that some measures of persona behavior generalize across models.
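As a rough illustration of the experimental setup the abstract describes, the sketch below shows how a persona might be prepended to a question via a system prompt, alongside the control-paraphrase and empty-persona baselines. It is a minimal sketch, not the authors' implementation: it assumes an OpenAI-style chat API, and the model name, persona strings, paraphrases, and question are all made-up examples.

```python
# Minimal sketch of persona-conditioned prompting (illustrative only).
# Assumes an OpenAI-compatible chat API; the personas, control
# paraphrases, and question below are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

personas = ["a teacher", "a woman", "a software engineer"]        # example persona strings
control = ["a helpful assistant", "an assistant that is helpful"]  # paraphrase controls
question = "What is 17 * 24?"  # stand-in for an objective-task item

def ask(persona: str | None, question: str) -> str:
    """Query the model, optionally assigning a persona in the system prompt."""
    messages = []
    if persona is not None:  # empty-persona baseline: no system prompt at all
        messages.append({"role": "system", "content": f"You are {persona}."})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; the paper evaluates seven LLMs
        messages=messages,
    )
    return response.choices[0].message.content

# Collect answers for personas, control paraphrases, and the empty baseline,
# so their variability can be compared downstream.
answers = {p: ask(p, question) for p in personas + control}
answers["<empty>"] = ask(None, question)
```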