

ThoughtSource: A central hub for large language model reasoning data.

Affiliations

Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria.

Section for Cognitive Systems, Technical University of Denmark, Lyngby, Denmark.

Publication information

Sci Data. 2023 Aug 8;10(1):528. doi: 10.1038/s41597-023-02433-3.

Abstract

Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to 'hallucinate' facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps as natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates seven scientific/medical, three general-domain and five math word question answering datasets.
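The chain-of-thought prompting technique mentioned in the abstract can be illustrated with a short, self-contained sketch. The Python example below shows zero-shot CoT prompting on a math word problem: the prompt appends a reasoning cue, and the final answer is extracted heuristically from the verbalized reasoning. The `query_llm` function is a hypothetical placeholder standing in for any LLM API call; it is not part of the ThoughtSource library, and the canned reply only keeps the sketch runnable offline.

```python
import re

def build_cot_prompt(question: str) -> str:
    """Build a zero-shot chain-of-thought prompt: the trailing cue asks
    the model to verbalize intermediate reasoning steps before answering."""
    return f"Q: {question}\nA: Let's think step by step."

def extract_answer(completion: str) -> str:
    """Heuristically pull the final answer from a reasoning chain;
    here we simply take the last number mentioned in the completion."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else completion.strip()

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM API (hypothetical; replace with
    your provider's client). Returns a canned reasoning chain so the
    sketch runs end to end without network access."""
    return ("The cafeteria started with 23 apples, used 20 for lunch, "
            "leaving 3, then bought 6 more, so 3 + 6 = 9. The answer is 9.")

if __name__ == "__main__":
    question = ("The cafeteria had 23 apples. They used 20 for lunch and "
                "bought 6 more. How many apples do they have?")
    completion = query_llm(build_cot_prompt(question))
    print("Reasoning:", completion)
    print("Answer:", extract_answer(completion))
```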

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b5dc/10409727/8f083758dd2a/41597_2023_2433_Fig1_HTML.jpg
