Suppr 超能文献



Online Continual Learning in Acoustic Scene Classification: An Empirical Study.

Author Information

Ha Donghee, Kim Mooseop, Jeong Chi Yoon

Affiliations

Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute, 218 Gajeong-ro, Daejeon 34129, Republic of Korea.

Artificial Intelligence, University of Science and Technology, 217 Gajeong-ro, Daejeon 34113, Republic of Korea.

Publication Information

Sensors (Basel). 2023 Aug 3;23(15):6893. doi: 10.3390/s23156893.

DOI: 10.3390/s23156893
PMID: 37571676
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10422258/
Abstract

Numerous deep learning methods for acoustic scene classification (ASC) have been proposed to improve the classification accuracy of sound events. However, only a few studies have focused on continual learning (CL), in which a model learns continually to cope with task changes. Therefore, in this study, we systematically analyzed ten recent CL methods to provide guidelines regarding their performance. The CL methods comprised two regularization-based methods and eight replay-based methods. First, we defined realistic and difficult scenarios, namely online class-incremental (OCI) and online domain-incremental (ODI) cases, for three public sound datasets. Then, we systematically analyzed the performance of each CL method in terms of average accuracy, average forgetting, and training time. In OCI scenarios, iCaRL and SCR showed the best performance for small buffer sizes, and GDumb showed the best performance for large buffer sizes. In ODI scenarios, SCR, which adopts supervised contrastive learning, consistently outperformed the other methods regardless of the memory buffer size. Most replay-based methods have an almost constant training time regardless of the memory buffer size, and their performance increases with the memory buffer size. Based on these results, GDumb and SCR should be considered first among continual learning methods for ASC.
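The abstract evaluates each method by average accuracy and average forgetting, the standard continual-learning metrics. As an illustrative sketch (not code from the paper), both can be computed from an accuracy matrix `acc[i][j]` holding test accuracy on task j after training on task i:

```python
import numpy as np

def average_accuracy(acc: np.ndarray) -> float:
    """Mean test accuracy over all tasks after training on the final task."""
    T = acc.shape[0]
    return float(acc[T - 1].mean())

def average_forgetting(acc: np.ndarray) -> float:
    """For each earlier task, the drop from its best accuracy at any
    earlier point to its accuracy after the final task, averaged."""
    T = acc.shape[0]
    if T < 2:
        return 0.0
    drops = [acc[:T - 1, j].max() - acc[T - 1, j] for j in range(T - 1)]
    return float(np.mean(drops))

# Toy 3-task example: row i = after training task i, column j = accuracy on task j.
acc = np.array([
    [0.90, 0.10, 0.05],
    [0.70, 0.85, 0.10],
    [0.60, 0.75, 0.88],
])
avg_acc = average_accuracy(acc)    # mean of the last row
avg_fgt = average_forgetting(acc)  # mean accuracy drop on tasks 0 and 1
```

Lower forgetting with comparable accuracy indicates a method that retains earlier tasks better, which is the trade-off the study measures across buffer sizes.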

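Eight of the ten methods compared are replay-based: a small memory buffer stores past examples that are mixed into training as the stream of tasks arrives. As a minimal sketch of the general idea (not the paper's implementation), an online buffer is commonly maintained with reservoir sampling, which keeps a uniform random sample of the whole stream within a fixed memory budget:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory holding a uniform random sample of all
    (example, label) pairs seen so far, via reservoir sampling."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example, label) -> None:
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            # Keep the new item with probability capacity / n_seen,
            # replacing a uniformly chosen stored item.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = (example, label)

    def sample(self, batch_size: int):
        """Draw a replay mini-batch (without replacement)."""
        k = min(batch_size, len(self.data))
        return self.rng.sample(self.data, k)

# Stream 1000 labeled audio clips through a 50-slot buffer.
buf = ReservoirBuffer(capacity=50)
for i in range(1000):
    buf.add(f"clip_{i}", i % 10)
replay_batch = buf.sample(16)
```

This also illustrates why most replay methods have near-constant training time in the study: the per-step replay cost depends on the mini-batch size drawn from the buffer, not on the buffer's capacity.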

Figures (PMC full text):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/287b/10422258/41c0253401bf/sensors-23-06893-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/287b/10422258/80394ef3e7b9/sensors-23-06893-g002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/287b/10422258/45f154714dea/sensors-23-06893-g003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/287b/10422258/0307715f3bfe/sensors-23-06893-g004.jpg

Similar Articles

1. Online Continual Learning in Acoustic Scene Classification: An Empirical Study. Sensors (Basel). 2023 Aug 3;23(15):6893. doi: 10.3390/s23156893.
2. CeCR: Cross-entropy contrastive replay for online class-incremental continual learning. Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.
3. Privacy-preserving continual learning methods for medical image classification: a comparative analysis. Front Med (Lausanne). 2023 Aug 14;10:1227515. doi: 10.3389/fmed.2023.1227515. eCollection 2023.
4. CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification. Sensors (Basel). 2020 Feb 24;20(4):1226. doi: 10.3390/s20041226.
5. Generative negative replay for continual learning. Neural Netw. 2023 May;162:369-383. doi: 10.1016/j.neunet.2023.03.006. Epub 2023 Mar 9.
6. Continual learning with attentive recurrent neural networks for temporal data classification. Neural Netw. 2023 Jan;158:171-187. doi: 10.1016/j.neunet.2022.10.031. Epub 2022 Nov 11.
7. Prototype-Guided Memory Replay for Continual Learning. IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10973-10983. doi: 10.1109/TNNLS.2023.3246049. Epub 2024 Aug 5.
8. Lifelong Adaptive Machine Learning for Sensor-Based Human Activity Recognition Using Prototypical Networks. Sensors (Basel). 2022 Sep 12;22(18):6881. doi: 10.3390/s22186881.
9. Continual learning in medical image analysis: A survey. Comput Biol Med. 2024 Nov;182:109206. doi: 10.1016/j.compbiomed.2024.109206. Epub 2024 Sep 26.
10. Rethinking exemplars for continual semantic segmentation in endoscopy scenes: Entropy-based mini-batch pseudo-replay. Comput Biol Med. 2023 Oct;165:107412. doi: 10.1016/j.compbiomed.2023.107412. Epub 2023 Aug 30.

References Cited in This Article

1. Class-Incremental Learning: Survey and Performance Evaluation on Image Classification. IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5513-5533. doi: 10.1109/TPAMI.2022.3213473. Epub 2023 Apr 3.
2. Environmental sound classification using temporal-frequency attention based convolutional neural network. Sci Rep. 2021 Nov 3;11(1):21552. doi: 10.1038/s41598-021-01045-4.
3. Replay in Deep Learning: Current Approaches and Missing Biological Elements. Neural Comput. 2021 Oct 12;33(11):2908-2950. doi: 10.1162/neco_a_01433.
4. Accelerating On-Device Learning with Layer-Wise Processor Selection Method on Unified Memory. Sensors (Basel). 2021 Mar 29;21(7):2364. doi: 10.3390/s21072364.
5. A Continual Learning Survey: Defying Forgetting in Classification Tasks. IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3366-3385. doi: 10.1109/TPAMI.2021.3057446. Epub 2022 Jun 3.
6. A comprehensive study of class incremental learning algorithms for visual tasks. Neural Netw. 2021 Mar;135:38-54. doi: 10.1016/j.neunet.2020.12.003. Epub 2020 Dec 8.
7. Continual lifelong learning with neural networks: A review. Neural Netw. 2019 May;113:54-71. doi: 10.1016/j.neunet.2019.01.012. Epub 2019 Feb 6.
8. Overcoming catastrophic forgetting in neural networks. Proc Natl Acad Sci U S A. 2017 Mar 28;114(13):3521-3526. doi: 10.1073/pnas.1611835114. Epub 2017 Mar 14.
9. Distance-based image classification: generalizing to new classes at near-zero cost. IEEE Trans Pattern Anal Mach Intell. 2013 Nov;35(11):2624-37. doi: 10.1109/TPAMI.2013.83.