
Exploring the uncertainty principle in neural networks through binary classification.

Author Information

Zhang Jun-Jie, Chen Jian-Nan, Meng De-Yu, Wang Xiu-Cheng

Affiliations

Northwest Institute of Nuclear Technology, Xi'an, 710024, Shaanxi, China.

School of Mathematics and Statistics and Ministry of Education Key Lab of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China.

Publication Information

Sci Rep. 2024 Nov 18;14(1):28402. doi: 10.1038/s41598-024-79028-4.

Abstract

Neural networks are reported to be vulnerable to minor, imperceptible attacks. The underlying mechanism and a quantitative measure of this vulnerability remain to be revealed. In this study, we explore the intrinsic trade-off between accuracy and robustness in neural networks, framed through the lens of the "uncertainty principle". By examining the fundamental limitations imposed by this principle, we reveal how neural networks inherently balance precision in feature extraction against susceptibility to adversarial perturbations. Our analysis shows that as a neural network achieves higher accuracy, its vulnerability to adversarial attacks increases, a phenomenon rooted in the uncertainty relation. Using mathematics from quantum mechanics, we offer a theoretical foundation and an analytical method for understanding the vulnerabilities of deep learning models.
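The vulnerability described above can be illustrated with a minimal sketch (not the paper's own model or method): a linear binary classifier attacked with an FGSM-style perturbation x' = x + ε·sign(∂L/∂x). The model, weights, and ε below are illustrative assumptions; the example only shows how a small, sign-aligned input perturbation can flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Probability of class 1 for a linear binary classifier.
    return sigmoid(w @ x + b)

def input_gradient(w, b, x, y):
    # Gradient of binary cross-entropy loss w.r.t. the input x:
    # dL/dx = (sigmoid(w·x + b) - y) * w
    return (predict(w, b, x) - y) * w

# Illustrative "confident" classifier: large weights give a sharp
# decision boundary (high accuracy on clean data).
w = np.array([4.0, -3.0])
b = 0.0
x = np.array([0.2, 0.1])   # clean input, true label 1
y = 1.0

p_clean = predict(w, b, x)  # confidently above 0.5

# FGSM-style perturbation: step in the direction that increases the loss.
eps = 0.2
x_adv = x + eps * np.sign(input_gradient(w, b, x, y))
p_adv = predict(w, b, x_adv)  # falls below 0.5: the prediction flips
```

The sharper the decision boundary (larger weights), the smaller the ε needed to flip the output, which is one concrete face of the accuracy-robustness trade-off the abstract refers to.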


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ef1f/11570626/2a2ad39e1aea/41598_2024_79028_Fig1_HTML.jpg
