
A divided and prioritized experience replay approach for streaming regression.

Authors

Leite Arnø Mikkel, Godhavn John-Morten, Aamo Ole Morten

Affiliations

Department of Engineering Cybernetics, Norwegian University of Science and Technology, Trondheim 7491, Norway.

Equinor Research Center, Ranheim 7053, Norway.

Publication

MethodsX. 2021 Nov 12;8:101571. doi: 10.1016/j.mex.2021.101571. eCollection 2021.

DOI:10.1016/j.mex.2021.101571
PMID:35004205
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8720895/
Abstract

In the streaming learning setting, an agent is presented with a data stream from which to learn in an online fashion. A common problem is catastrophic forgetting of old knowledge due to updates to the model. Mitigating catastrophic forgetting has received a lot of attention, and a variety of methods exist to solve this problem. In this paper, we present a divided and prioritized experience replay approach for streaming regression, in which relevant observations are retained in the replay, and extra focus is added to poorly estimated observations through prioritization. Using a real-world dataset, the method is compared to the standard sliding window approach. A statistical power analysis is performed, showing how our approach improves performance on rare, important events at a trade-off in performance for more common observations. Close inspections of the dataset are provided, with emphasis on areas where the standard approach fails. A rephrasing of the problem to a binary classification problem is performed to separate common and rare, important events. These results provide an added perspective regarding the improvement made on rare events.
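The core idea in the abstract — a replay buffer divided into regions of the target space, with poorly estimated observations sampled more often — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bin boundaries, per-bin capacity, FIFO eviction, and the error-proportional sampling rule are all assumptions made here for clarity.

```python
import numpy as np

class DividedPrioritizedReplay:
    """Replay buffer divided into bins over the target range; sampling is
    weighted by each stored observation's absolute prediction error, so
    poorly estimated (often rare) observations are revisited more often.
    Design choices here are illustrative assumptions, not the paper's."""

    def __init__(self, bin_edges, capacity_per_bin=100):
        self.bin_edges = bin_edges                      # divides the target space
        self.bins = [[] for _ in range(len(bin_edges) + 1)]
        self.capacity = capacity_per_bin

    def _bin_index(self, y):
        # Place a target value into its bin of the divided replay.
        return int(np.searchsorted(self.bin_edges, y))

    def add(self, x, y, error):
        b = self.bins[self._bin_index(y)]
        b.append([x, y, abs(error)])                    # store last known error as priority
        if len(b) > self.capacity:
            b.pop(0)                                    # evict oldest within this bin only

    def sample(self, batch_size):
        pool = [item for b in self.bins for item in b]
        errors = np.array([item[2] for item in pool]) + 1e-6
        probs = errors / errors.sum()                   # priority proportional to error
        idx = np.random.choice(len(pool), size=batch_size, p=probs)
        return [pool[i] for i in idx]
```

Because eviction happens per bin rather than over one shared sliding window, observations from a rare region of the target space are not pushed out by a flood of common ones, which matches the abstract's reported trade-off: better performance on rare, important events at some cost on common observations.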


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/18bb6efd0b46/gr7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/0ca0e297f03e/ga1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/11a7f58288fe/gr8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/f3293e2651f5/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/9334330827c9/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/e2f08b6abc33/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/886c2567251a/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/00b1290d7dc2/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1d9d/8720895/e7f20afed133/gr6.jpg

Similar Articles

1. A divided and prioritized experience replay approach for streaming regression.
MethodsX. 2021 Nov 12;8:101571. doi: 10.1016/j.mex.2021.101571. eCollection 2021.
2. Rethinking exemplars for continual semantic segmentation in endoscopy scenes: Entropy-based mini-batch pseudo-replay.
Comput Biol Med. 2023 Oct;165:107412. doi: 10.1016/j.compbiomed.2023.107412. Epub 2023 Aug 30.
3. Map-based experience replay: a memory-efficient solution to catastrophic forgetting in reinforcement learning.
Front Neurorobot. 2023 Jun 27;17:1127642. doi: 10.3389/fnbot.2023.1127642. eCollection 2023.
4. Generative negative replay for continual learning.
Neural Netw. 2023 May;162:369-383. doi: 10.1016/j.neunet.2023.03.006. Epub 2023 Mar 9.
5. Prioritized experience replay based on dynamics priority.
Sci Rep. 2024 Mar 12;14(1):6014. doi: 10.1038/s41598-024-56673-3.
6. CeCR: Cross-entropy contrastive replay for online class-incremental continual learning.
Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.
7. Online feature selection with streaming features.
IEEE Trans Pattern Anal Mach Intell. 2013 May;35(5):1178-92. doi: 10.1109/TPAMI.2012.197.
8. Designing a Streaming Algorithm for Outlier Detection in Data Mining-An Incremental Approach.
Sensors (Basel). 2020 Feb 26;20(5):1261. doi: 10.3390/s20051261.
9. LwF-ECG: Learning-without-forgetting approach for electrocardiogram heartbeat classification based on memory with task selector.
Comput Biol Med. 2021 Oct;137:104807. doi: 10.1016/j.compbiomed.2021.104807. Epub 2021 Aug 27.
10. Human hippocampal replay during rest prioritizes weakly learned information and predicts memory performance.
Nat Commun. 2018 Sep 25;9(1):3920. doi: 10.1038/s41467-018-06213-1.
