Fisher, Sarah A.
School of English, Communication and Philosophy, Cardiff University, Cardiff, UK.
Ethics Inf Technol. 2024;26(4):67. doi: 10.1007/s10676-024-09802-5. Epub 2024 Oct 4.
Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.