Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China.
Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China.
Nat Commun. 2022 Jun 2;13(1):3094. doi: 10.1038/s41467-022-30761-2.
The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods possess only a single cognitive ability. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained on large-scale multimodal data, which can be quickly adapted for various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning on weak-semantic-correlation data crawled from the Internet, and show that promising results can be obtained on a wide range of downstream tasks. In particular, with the developed model-interpretability tools, we demonstrate that our foundation model now possesses strong imagination ability. We believe that our work makes a transformative stride towards AGI, from our common practice of "weak or narrow AI" to that of "strong or generalized AI".
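The abstract does not spell out the pre-training objective used on the weakly correlated image-text data. A common instantiation of such cross-modal self-supervised learning is a two-tower contrastive (InfoNCE) objective over paired image and text embeddings; the sketch below illustrates that general idea only. The encoder dimensions, batch size, and temperature value are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a two-tower contrastive (InfoNCE) objective, assuming
# image and text embeddings come from separate encoders trained on
# web-crawled image-text pairs. All hyperparameters here are illustrative.
import torch
import torch.nn.functional as F


def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (batch, dim) tensors; matching pairs share the
    same row index, and every other row in the batch acts as a negative.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)          # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)      # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)


# Usage with random stand-in embeddings (real training would feed the
# outputs of an image encoder and a text encoder).
img = torch.randn(32, 256)
txt = torch.randn(32, 256)
print(contrastive_loss(img, txt).item())
```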