
Information-Theoretic Generalization Bounds for Meta-Learning and Applications

Authors

Sharu Theresa Jose, Osvaldo Simeone

Affiliations

Department of Engineering, King's College London, London WC2R 2LS, UK.

Publication

Entropy (Basel). 2021 Jan 19;23(1):126. doi: 10.3390/e23010126.

Abstract

Meta-learning, or "learning to learn", refers to techniques that infer an inductive bias from data corresponding to multiple related tasks, with the goal of improving sample efficiency on new, previously unobserved tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and that on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered, which use either separate within-task training and test sets, as in model-agnostic meta-learning (MAML), or joint within-task training and test sets, as in Reptile. Extending existing work on conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter class, the derived bound includes an additional MI term between the output of the per-task learning procedure and the corresponding data set, which captures within-task uncertainty. Tighter bounds are then developed for both classes via novel individual-task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including to a broad class of noisy iterative algorithms for meta-learning.
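For context, the conventional-learning result that the abstract says is being extended is the well-known mutual-information generalization bound of Xu and Raginsky; the notation below is a standard rendering of that prior bound, not a formula taken from this paper. For a learning algorithm that outputs a hypothesis $W$ from a data set $S = (Z_1, \dots, Z_n)$ of $n$ i.i.d. samples drawn from a distribution $\mu$, and a loss function $\ell(w, Z)$ that is $\sigma$-sub-Gaussian under $\mu$ for every fixed $w$, the expected generalization gap satisfies

```latex
\left| \,\mathbb{E}\!\left[ L_{\mu}(W) - L_{S}(W) \right] \right|
\;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)},
\qquad
L_{\mu}(w) = \mathbb{E}_{Z \sim \mu}\!\left[\ell(w, Z)\right],
\quad
L_{S}(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(w, Z_i).
```

The paper's bounds follow this template at the meta level: the hypothesis $W$ is replaced by the output of the meta-learning algorithm (the inferred inductive bias) and $S$ by the meta-training data from multiple tasks, with, for the joint training/test-set class, an additional per-task MI term of the same form capturing within-task uncertainty.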

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/235d/7835863/d9c5f2c7dd3f/entropy-23-00126-g001.jpg
