TNO, Human Machine Teaming, Soesterberg, The Netherlands.
Donders Centre for Neuroscience, Nijmegen, The Netherlands.
J Public Health (Oxf). 2023 Aug 28;45(3):689-696. doi: 10.1093/pubmed/fdad005.
Intelligent artificial agents ('agents') have emerged in various domains of human society (healthcare, legal, social). Since using intelligent agents can lead to biases, a commonly proposed solution is to keep the human in the loop. Will this be enough to ensure unbiased decision-making?
To address this question, an experimental testbed was developed in which a human participant and an agent collaboratively conduct triage on patients during a pandemic crisis. The agent uses data to support the human by providing advice and extra information about the patients. In one condition, the agent provided sound advice; in the other, it gave biased advice. The research question was whether participants would neutralize the bias introduced by the biased artificial agent.
Although this was an exploratory study, the data suggest that human participants may not have been sufficiently in control to correct the agent's bias.
This research shows how important it is to design and test for human control in concrete human-machine collaboration contexts. It suggests that insufficient human control can potentially result in people being unable to detect biases in machines and thus unable to prevent machine biases from affecting decisions.