Moshe Glickman, Tali Sharot
Affective Brain Lab, Department of Experimental Psychology, University College London, London, UK.
Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK.
Nat Hum Behav. 2025 Feb;9(2):345-359. doi: 10.1038/s41562-024-02077-2. Epub 2024 Dec 18.
Artificial intelligence (AI) technologies are rapidly advancing, enhancing human capabilities in fields ranging from finance to medicine. Despite their numerous advantages, AI systems can exhibit biased judgements in domains ranging from perception to emotion. Here, in a series of experiments (n = 1,401 participants), we reveal a feedback loop in which human-AI interactions alter the processes underlying human perceptual, emotional and social judgements, subsequently amplifying biases in humans. This amplification is significantly greater than that observed in interactions between humans, owing both to the tendency of AI systems to amplify biases and to the way humans perceive AI systems. Participants are often unaware of the extent of the AI's influence, rendering them more susceptible to it. These findings uncover a mechanism wherein AI systems amplify biases that are then internalized by humans, triggering a snowball effect in which small errors in judgement escalate into much larger ones.