
Artificial intelligence and illusions of understanding in scientific research.

Affiliations

Department of Anthropology, Yale University, New Haven, CT, USA.

Department of Psychology, Princeton University, Princeton, NJ, USA.

Publication information

Nature. 2024 Mar;627(8002):49-58. doi: 10.1038/s41586-024-07146-0. Epub 2024 Mar 6.

Abstract

Scientists are enthusiastically imagining ways in which artificial intelligence (AI) tools might improve research. Why are AI tools so attractive and what are the risks of implementing them across the research pipeline? Here we develop a taxonomy of scientists' visions for AI, observing that their appeal comes from promises to improve productivity and objectivity by overcoming human shortcomings. But proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community's ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less. By analysing the appeal of these tools, we provide a framework for advancing discussions of responsible knowledge production in the age of AI.

