

A multi-agent reinforcement learning based approach for automatic filter pruning.

Authors

Li Zhemin, Zuo Xiaojing, Song Yiping, Liang Dong, Xie Zheng

Affiliations

College of Sciences, National University of Defense Technology, 410073, Changsha, China.

Published in

Sci Rep. 2024 Dec 28;14(1):31193. doi: 10.1038/s41598-024-82562-w.

DOI: 10.1038/s41598-024-82562-w
PMID: 39730902
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11681029/
Abstract

Deep Convolutional Neural Networks (DCNNs), due to their high computational and memory requirements, face significant challenges when deployed on resource-constrained devices. Network pruning, an essential model compression technique, helps enable the efficient deployment of DCNNs on such devices. Compared to traditional rule-based pruning methods, Reinforcement Learning (RL)-based automatic pruning often yields more effective pruning strategies through its ability to learn and adapt. However, current research sets only a single agent to explore the optimal pruning rate for all convolutional layers, ignoring the interactions and effects among the layers. To address this challenge, this paper proposes an automatic filter pruning method based on the multi-agent reinforcement learning algorithm QMIX, named QMIX_FP. The multi-layer structure of a DCNN is modeled as a multi-agent system, which accounts for each convolutional layer's varying sensitivity to the entire network and the interactions among layers. We employ QMIX, in which each individual agent contributes monotonically to the system, to explore the optimal iterative pruning strategy for each convolutional layer. Furthermore, fine-tuning the pruned network with knowledge distillation accelerates the recovery of model performance. The effectiveness of this method is demonstrated on two benchmark DCNNs, VGG-16 and AlexNet, over the CIFAR-10 and CIFAR-100 datasets. Extensive experiments under different scenarios show that QMIX_FP not only reduces the computational and memory requirements of the networks but also maintains their accuracy, making it a significant advance in model compression and in the efficient deployment of deep learning models on resource-constrained devices.
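The core operation the abstract describes — pruning a chosen fraction of filters in each convolutional layer — can be sketched with a simple magnitude criterion. This is an illustrative sketch, not the paper's implementation: `prune_filters` uses a plain L1-norm ranking, and the per-layer `rates` below are hard-coded placeholders standing in for the rates a QMIX policy would select per layer.

```python
import numpy as np

def prune_filters(weights, rate):
    """Zero out the lowest-L1-norm filters of one conv layer.

    weights: array of shape (num_filters, in_channels, kh, kw)
    rate: fraction of filters to prune in this layer (0..1)
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    num_prune = int(round(rate * weights.shape[0]))
    pruned = weights.copy()
    if num_prune > 0:
        drop = np.argsort(norms)[:num_prune]  # indices of smallest-norm filters
        pruned[drop] = 0.0
    return pruned

# Two toy conv layers; in the paper's setting one agent would pick each layer's rate.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 3, 3, 3)), rng.standard_normal((32, 16, 3, 3))]
rates = [0.25, 0.5]  # placeholder per-layer pruning rates
pruned_layers = [prune_filters(w, r) for w, r in zip(layers, rates)]
```

Choosing these rates jointly rather than with a single global agent is the point of the multi-agent formulation: a rate that is harmless in one layer can be costly in another, and the layers' effects interact.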


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ead4/11681029/3249d5c01148/41598_2024_82562_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ead4/11681029/63a3bca64b82/41598_2024_82562_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ead4/11681029/d4e7d0d70ebe/41598_2024_82562_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ead4/11681029/acaad74af297/41598_2024_82562_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ead4/11681029/0e507c4e032d/41598_2024_82562_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ead4/11681029/42e30e3cf6c7/41598_2024_82562_Fig4_HTML.jpg

Similar articles

1
A multi-agent reinforcement learning based approach for automatic filter pruning.
Sci Rep. 2024 Dec 28;14(1):31193. doi: 10.1038/s41598-024-82562-w.
2
Evolutionary Multi-Objective One-Shot Filter Pruning for Designing Lightweight Convolutional Neural Network.
Sensors (Basel). 2021 Sep 2;21(17):5901. doi: 10.3390/s21175901.
3
Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification.
Front Comput Neurosci. 2021 Oct 27;15:760554. doi: 10.3389/fncom.2021.760554. eCollection 2021.
4
PCA driven mixed filter pruning for efficient convNets.
PLoS One. 2022 Jan 24;17(1):e0262386. doi: 10.1371/journal.pone.0262386. eCollection 2022.
5
Weak sub-network pruning for strong and efficient neural networks.
Neural Netw. 2021 Dec;144:614-626. doi: 10.1016/j.neunet.2021.09.015. Epub 2021 Sep 30.
6
Adding Before Pruning: Sparse Filter Fusion for Deep Convolutional Neural Networks via Auxiliary Attention.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):3930-3942. doi: 10.1109/TNNLS.2021.3106917. Epub 2025 Feb 28.
7
Model pruning based on filter similarity for edge device deployment.
Front Neurorobot. 2023 Mar 2;17:1132679. doi: 10.3389/fnbot.2023.1132679. eCollection 2023.
8
Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks.
Sensors (Basel). 2021 Jan 28;21(3):880. doi: 10.3390/s21030880.
9
Filter pruning for convolutional neural networks in semantic image segmentation.
Neural Netw. 2024 Jan;169:713-732. doi: 10.1016/j.neunet.2023.11.010. Epub 2023 Nov 7.
10
Hierarchical Threshold Pruning Based on Uniform Response Criterion.
IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):10869-10881. doi: 10.1109/TNNLS.2023.3244994. Epub 2024 Aug 5.

References cited in this article

1
A real-time constellation image classification method of wireless communication signals based on the lightweight network MobileViT.
Cogn Neurodyn. 2024 Apr;18(2):659-671. doi: 10.1007/s11571-023-10015-7. Epub 2023 Oct 10.
2
Memristor-Based Neural Network Circuit of Associative Memory With Overshadowing and Emotion Congruent Effect.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3618-3630. doi: 10.1109/TNNLS.2023.3348553. Epub 2025 Feb 6.
3
Structured Pruning for Deep Convolutional Neural Networks: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):2900-2919. doi: 10.1109/TPAMI.2023.3334614. Epub 2024 Apr 3.
4
HRel: Filter pruning based on High Relevance between activation maps and class labels.
Neural Netw. 2022 Mar;147:186-197. doi: 10.1016/j.neunet.2021.12.017. Epub 2021 Dec 30.
5
Fully Convolutional Networks for Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):640-651. doi: 10.1109/TPAMI.2016.2572683. Epub 2016 May 24.