Winder Philipp, Hildebrand Christian, Hartmann Jochen
Institute of Behavioral Science & Technology, University of St. Gallen, St. Gallen, Switzerland.
TUM School of Management, Technical University of Munich, Munich, Bavaria, Germany.
PLoS One. 2025 Jun 27;20(6):e0325459. doi: 10.1371/journal.pone.0325459. eCollection 2025.
Large language models are increasingly used by private investors seeking financial advice. The current paper examines the potential of these models to perpetuate investment biases and affect the economic security of individuals at scale. We provide a systematic assessment of how large language models used for investment advice shape the portfolio risks of private investors. We offer a comprehensive model of large language model investment advice risk, examining five key dimensions of portfolio risks (geographical cluster risk, sector cluster risk, trend chasing risk, active investment allocation risk, and total expense risk). We demonstrate across four studies that large language models used for investment advice induce increased portfolio risks across all five risk dimensions, and that a range of debiasing interventions only partially mitigate these risks. Our findings show that large language models exhibit similar "cognitive" biases as human investors, reinforcing existing investment biases inherent in their training data. These findings have important implications for private investors, policymakers, artificial intelligence developers, financial institutions, and the responsible development of large language models in the financial sector.