
Optimization of neural networks via finite-value quantum fluctuations.

Affiliations

Graduate School of Information Sciences, Tohoku University, Sendai, 980-8579, Japan.

Electronics Research and Innovation Division, DENSO Corporation, Chuo-ku, Tokyo, 103-6015, Japan.

Publication

Sci Rep. 2018 Jul 2;8(1):9950. doi: 10.1038/s41598-018-28212-4.

Abstract

We numerically test an optimization method for deep neural networks (DNNs) that uses quantum fluctuations inspired by quantum annealing. For efficient optimization, our method exploits the quantum tunneling effect through potential barriers. The path integral formulation of the DNN optimization generates an attracting force that simulates the quantum tunneling effect. In the standard quantum annealing method, the quantum fluctuations vanish at the last stage of optimization. In this study, we propose a learning protocol that keeps the quantum-fluctuation strength at a finite value to obtain higher generalization performance, which is a type of robustness. We demonstrate the performance of our method on two well-known open datasets: the MNIST dataset and the Olivetti face dataset. Although computational costs prevent us from testing our method on large datasets with high-dimensional data, the results show that our method can enhance generalization performance by maintaining a finite value of the quantum fluctuations.
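To make the path-integral picture concrete, the following is a minimal toy sketch (not the paper's implementation): in the path integral (Suzuki-Trotter) mapping, the parameters are duplicated into interacting replicas, and each replica feels the loss gradient plus an attracting force toward its neighboring replicas. The coupling strength `gamma`, its schedule, the double-well toy loss, and all function names here are illustrative assumptions; the hedged point is only that the schedule ends at a *finite* coupling rather than collapsing the replicas, mirroring the finite-fluctuation protocol described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w):
    # Gradient of a toy double-well loss 0.25 * (w**2 - 1)**2 per
    # coordinate, standing in for a non-convex DNN loss with barriers.
    return w * (w**2 - 1.0)

def replica_descent(n_steps=2000, M=8, dim=2, lr=0.05,
                    gamma_init=0.0, gamma_final=0.5):
    # M replicas of the weight vector (the Trotter direction).
    w = rng.normal(0.0, 1.0, size=(M, dim))
    for t in range(n_steps):
        # Ramp the replica coupling up to a finite final value; driving
        # it to infinity would correspond to vanishing fluctuations.
        gamma = gamma_init + (gamma_final - gamma_init) * t / n_steps
        # Neighbouring replicas on a ring.
        neigh = np.roll(w, 1, axis=0) + np.roll(w, -1, axis=0)
        # Loss gradient plus the attracting force between replicas.
        w -= lr * (loss_grad(w) + gamma * (2.0 * w - neigh))
        # Small thermal noise so replicas can explore across barriers.
        w += 0.01 * np.sqrt(lr) * rng.normal(size=w.shape)
    # Report the replica average as the final parameter estimate.
    return w.mean(axis=0)

w_star = replica_descent()
```

Because `gamma_final` is finite, the replicas stay loosely tied rather than freezing into a single configuration, which is the mechanism the abstract associates with improved generalization.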


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e39d/6028692/d00d922b3b92/41598_2018_28212_Fig1_HTML.jpg
