

Continuously Constructive Deep Neural Networks.

Author Information

Irsoy Ozan, Alpaydin Ethem

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Apr;31(4):1124-1133. doi: 10.1109/TNNLS.2019.2918225. Epub 2019 Jun 24.

Abstract

Traditionally, deep learning algorithms update the network weights, whereas the network architecture is chosen manually through a process of trial and error. In this paper, we propose two novel approaches that automatically update the network structure while also learning its weights. The novelty of our approach lies in our parameterization, where the depth, or additional complexity, is encapsulated continuously in the parameter space through control parameters. We propose two methods: in tunnel networks, this selection is done at the level of a hidden unit, and in budding perceptrons, at the level of a network layer; updating the control parameter introduces either another hidden unit or another layer. We show the effectiveness of our methods on the synthetic two-spiral data and on three real data sets, MNIST, MIRFLICKR, and CIFAR, where we see that our proposed methods, with the same set of hyperparameters, can correctly match the network complexity to the task complexity.
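The abstract does not give the authors' exact formulation, but the core idea, a continuous control parameter that smoothly "buds" a new layer into the network, can be sketched as follows. The sigmoid gate `sigmoid(c)` and the blend with the identity map are illustrative assumptions for this sketch, not the paper's equations; `BuddingLayer` and its parameters are hypothetical names.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BuddingLayer:
    """Sketch of a layer whose presence is governed by a continuous
    control parameter c. With sigmoid(c) near 0 the layer passes its
    input through almost unchanged; as training increases c, the new
    hidden transformation blends in smoothly."""

    def __init__(self, W, b, c):
        self.W = W  # weights of the candidate hidden transformation
        self.b = b  # bias of the candidate hidden transformation
        self.c = c  # scalar control parameter (the "bud" gate)

    def forward(self, x):
        g = sigmoid(self.c)               # gate value in (0, 1)
        h = np.tanh(x @ self.W + self.b)  # candidate hidden layer
        return (1.0 - g) * x + g * h      # smooth identity/layer blend

W = 0.5 * np.eye(3)
b = np.zeros(3)
x = np.array([1.0, -1.0, 0.5])

closed = BuddingLayer(W, b, c=-4.0)  # sigmoid(-4) ~ 0.018: gate shut
opened = BuddingLayer(W, b, c=4.0)   # sigmoid(4) ~ 0.982: gate open

y_closed = closed.forward(x)  # nearly the identity map
y_open = opened.forward(x)    # dominated by the new hidden layer
```

Because the gate is differentiable in `c`, gradient descent can adjust it jointly with the weights, which is what allows the structure itself to be learned rather than fixed in advance.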

