
The Eighty Five Percent Rule for optimal learning.

Affiliations

Department of Psychology, University of Arizona, Tucson, AZ, USA.

Cognitive Science Program, University of Arizona, Tucson, AZ, USA.

Publication info

Nat Commun. 2019 Nov 5;10(1):4646. doi: 10.1038/s41467-019-12552-4.

PMID: 31690723
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6831579/
Abstract

Researchers and educators have long wrestled with the question of how best to teach their clients be they humans, non-human animals or machines. Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this 'Eighty Five Percent Rule' for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.
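The abstract's 15.87% figure is not arbitrary: in the Gaussian-noise setting the paper analyzes, it corresponds to the standard normal CDF evaluated at −1. A minimal sketch (not the authors' code) that reproduces the number using only the standard library:

```python
import math

def gaussian_cdf(x: float) -> float:
    # Standard normal CDF, expressed via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Optimal training error rate reported in the paper: Phi(-1) ~ 15.87%,
# equivalently an optimal training accuracy of about 85%.
optimal_error = gaussian_cdf(-1.0)
optimal_accuracy = 1.0 - optimal_error
print(f"{optimal_error:.4f}")     # prints 0.1587
print(f"{optimal_accuracy:.4f}")  # prints 0.8413
```

This recovers the "Eighty Five Percent Rule" headline numbers: training at roughly 15.87% error (about 85% accuracy) is the sweet spot the paper derives for gradient-descent-based binary classification.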


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/326d/6831579/84f5d0818898/41467_2019_12552_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/326d/6831579/2c6fcc28a6e3/41467_2019_12552_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/326d/6831579/45524edd4ab7/41467_2019_12552_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/326d/6831579/4420588f00dc/41467_2019_12552_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/326d/6831579/80a0ebcb42ca/41467_2019_12552_Fig5_HTML.jpg

Similar articles

1
The Eighty Five Percent Rule for optimal learning.
Nat Commun. 2019 Nov 5;10(1):4646. doi: 10.1038/s41467-019-12552-4.
2
Universality of gradient descent neural network training.
Neural Netw. 2022 Jun;150:259-273. doi: 10.1016/j.neunet.2022.02.016. Epub 2022 Mar 2.
3
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
4
A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.
5
Biologically plausible deep learning - But how far can we go with shallow networks?
Neural Netw. 2019 Oct;118:90-101. doi: 10.1016/j.neunet.2019.06.001. Epub 2019 Jun 20.
6
Local online learning in recurrent networks with random feedback.
Elife. 2019 May 24;8:e43299. doi: 10.7554/eLife.43299.
7
The general inefficiency of batch training for gradient descent learning.
Neural Netw. 2003 Dec;16(10):1429-51. doi: 10.1016/S0893-6080(03)00138-2.
8
Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
Artif Intell Med. 2019 Jul;98:59-76. doi: 10.1016/j.artmed.2019.07.008. Epub 2019 Jul 25.
9
One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks.
Neural Comput. 2019 Oct;31(10):1985-2003. doi: 10.1162/neco_a_01222. Epub 2019 Aug 8.
10
Biological batch normalisation: How intrinsic plasticity improves learning in deep neural networks.
PLoS One. 2020 Sep 23;15(9):e0238454. doi: 10.1371/journal.pone.0238454. eCollection 2020.

Cited by

1
Is a single calibration for the TloadDback cognitive fatigue induction task reliable?
Front Psychol. 2025 Jul 24;16:1561819. doi: 10.3389/fpsyg.2025.1561819. eCollection 2025.
2
Deep Learning Improves Parameter Estimation in Reinforcement Learning Models.
bioRxiv. 2025 Jun 18:2025.03.21.644663. doi: 10.1101/2025.03.21.644663.
3
Implementation of a Recovery College Embedded in a Swedish Psychiatry Organization: Qualitative Case Study.
J Particip Med. 2024 Sep 12;16:e55882. doi: 10.2196/55882.
4
Pre-movement muscle co-contraction associated with motor performance deterioration under high reward conditions.
Sci Rep. 2024 Jul 19;14(1):16710. doi: 10.1038/s41598-024-67630-5.
5
Choosing to learn: The importance of student autonomy in higher education.
Sci Adv. 2024 Jul 19;10(29):eado6759. doi: 10.1126/sciadv.ado6759. Epub 2024 Jul 17.
6
Leveraging Digital Workflows to Transition the Orthotics and Prosthetics Profession Toward a Client-Centric and Values-Based Care Model.
Can Prosthet Orthot J. 2023 Dec 22;6(2):42221. doi: 10.33137/cpoj.v6i2.42221. eCollection 2023.
7
A multi-institutional machine learning algorithm for prognosticating facial nerve injury following microsurgical resection of vestibular schwannoma.
Sci Rep. 2024 Jun 5;14(1):12963. doi: 10.1038/s41598-024-63161-1.
8
A Comparison of Veterans with Problematic Opioid Use Identified through Natural Language Processing of Clinical Notes versus Using Diagnostic Codes.
Healthcare (Basel). 2024 Apr 6;12(7):799. doi: 10.3390/healthcare12070799.
9
Reward Reinforcement Creates Enduring Facilitation of Goal-directed Behavior.
J Cogn Neurosci. 2024 Dec 1;36(12):2847-2862. doi: 10.1162/jocn_a_02150.
10
Effect of immersive virtual reality-based cognitive remediation in patients with mood or psychosis spectrum disorders: study protocol for a randomized, controlled, double-blinded trial.
Trials. 2024 Jan 24;25(1):82. doi: 10.1186/s13063-024-07910-7.

References cited

1
Toward a Rational and Mechanistic Account of Mental Effort.
Annu Rev Neurosci. 2017 Jul 25;40:99-124. doi: 10.1146/annurev-neuro-072116-031526. Epub 2017 Mar 31.
2
Learning from Errors.
Annu Rev Psychol. 2017 Jan 3;68:465-489. doi: 10.1146/annurev-psych-010416-044022. Epub 2016 Sep 14.
3
Closed-loop adaptation of neurofeedback based on mental effort facilitates reinforcement learning of brain self-regulation.
Clin Neurophysiol. 2016 Sep;127(9):3156-3164. doi: 10.1016/j.clinph.2016.06.020. Epub 2016 Jun 27.
4
The relation between the sense of agency and the experience of flow.
Conscious Cogn. 2016 Jul;43:133-42. doi: 10.1016/j.concog.2016.06.001. Epub 2016 Jun 9.
5
Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
6
The effects of task difficulty, novelty and the size of the search space on intrinsically motivated exploration.
Front Neurosci. 2014 Oct 14;8:317. doi: 10.3389/fnins.2014.00317. eCollection 2014.
7
The expected value of control: an integrative theory of anterior cingulate cortex function.
Neuron. 2013 Jul 24;79(2):217-40. doi: 10.1016/j.neuron.2013.07.007.
8
Attention enhances synaptic efficacy and the signal-to-noise ratio in neural circuits.
Nature. 2013 Jul 25;499(7459):476-80. doi: 10.1038/nature12276. Epub 2013 Jun 26.
9
The Goldilocks effect: human infants allocate attention to visual sequences that are neither too simple nor too complex.
PLoS One. 2012;7(5):e36399. doi: 10.1371/journal.pone.0036399. Epub 2012 May 23.
10
Metacognitive Judgments and Control of Study.
Curr Dir Psychol Sci. 2009 Jun 1;18(3):159-163. doi: 10.1111/j.1467-8721.2009.01628.x.