Korteling JE (Hans), van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC, Eikelboom AR.
TNO Human Factors, Soesterberg, Netherlands.
Front Artif Intell. 2021 Mar 25;4:622364. doi: 10.3389/frai.2021.622364. eCollection 2021.
AI is one of the most debated subjects of today, and there seems to be little common understanding of the differences and similarities between human intelligence and artificial intelligence. Discussions of many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions and, for instance, the pursuit of human-like intelligence as the gold standard for artificial intelligence. In order to provide more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a most prominent issue is how we can use (and "collaborate" with) these systems as effectively as possible. For what tasks and under what conditions are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How can we deploy AI systems effectively to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI "partners" with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying 'psychological' mechanisms of AI. So, in order to obtain well-functioning human-AI systems, the development of such mental models in humans should be addressed more vigorously.
For this purpose, a first framework for educational content is proposed.