Schlund Rachel, Zitek Emily M
ILR School, Department of Organizational Behavior, Cornell University, Ithaca, NY, USA.
Commun Psychol. 2024 Jun 6;2(1):53. doi: 10.1038/s44271-024-00102-8.
Past research indicates that people tend to react adversely to surveillance, but does it matter whether advanced technologies such as artificial intelligence conduct the surveillance rather than humans? Across four experiments (Study 1, N = 107; Study 2, N = 157; Study 3, N = 117; Study 4, N = 814), we examined how participants reacted to monitoring and evaluation by human or algorithmic surveillance when recalling instances of surveillance from their lives (Study 1), generating ideas (Studies 2 and 3), or imagining working in a call center (Study 4). Our results revealed that participants subjected to algorithmic (vs. human) surveillance perceived that they had less autonomy (Studies 1, 3, and 4), criticized the surveillance more (Studies 1-3), performed worse (Studies 2 and 3), and reported greater intentions to resist (Studies 1 and 4). Framing the purpose of the algorithmic surveillance as developmental, and thus informational, rather than evaluative mitigated the perceived loss of autonomy and the intention to resist (Study 4).