

Multiobjective Reinforcement Learning-Based Neural Architecture Search for Efficient Portrait Parsing.

Author Information

Lyu Bo, Wen Shiping, Shi Kaibo, Huang Tingwen

Publication Information

IEEE Trans Cybern. 2023 Feb;53(2):1158-1169. doi: 10.1109/TCYB.2021.3104866. Epub 2023 Jan 13.

DOI: 10.1109/TCYB.2021.3104866
PMID: 34460412
Abstract

This article is dedicated to automatically exploring efficient portrait parsing models that are easily deployed on edge-computing or terminal devices. To trade off resource cost against performance, we design a multiobjective reinforcement learning (RL)-based neural architecture search (NAS) scheme that comprehensively balances accuracy, parameter count, FLOPs, and inference latency. Under varying hyperparameter configurations, the search procedure yields a set of strong objective-oriented architectures. Combining two-stage training with precomputed, memory-resident feature maps effectively reduces the time consumption of the RL-based NAS method, allowing us to complete approximately 1000 search iterations in two GPU-days. To accelerate the convergence of lightweight candidate architectures, we incorporate knowledge distillation into the training within the search process; this also provides a reasonable evaluation signal to the RL controller, enabling it to converge well. Finally, we fully train the outstanding Pareto-optimal architectures, obtaining a series of excellent portrait parsing models (with only approximately 0.3M parameters). Furthermore, we directly transfer the architectures searched on CelebAMask-HQ (portrait parsing) to other portrait and face segmentation tasks, achieving state-of-the-art performance of 96.5% mIoU on EG1800 (portrait segmentation) and a 91.6% overall F1-score on HELEN (face labeling). That is, our models significantly surpass manually designed networks in accuracy, with lower resource consumption and higher real-time performance.
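The abstract describes a search reward that jointly balances accuracy, parameters, FLOPs, and inference latency. A common way to realize such a tradeoff is a scalarized reward that the RL controller maximizes; the sketch below illustrates that idea only, with weights, scaling, and function names as assumptions rather than the paper's exact formulation.

```python
# Illustrative scalarized multiobjective reward for RL-based NAS.
# Weight values and metric scaling are assumptions for illustration.

def nas_reward(accuracy, params_m, flops_g, latency_ms,
               w_acc=1.0, w_params=0.1, w_flops=0.1, w_latency=0.1):
    """Higher accuracy raises the reward; resource costs lower it.

    accuracy   -- validation metric in [0, 1] (e.g. mIoU)
    params_m   -- parameter count in millions
    flops_g    -- FLOPs in billions
    latency_ms -- measured inference latency in milliseconds
    """
    cost = w_params * params_m + w_flops * flops_g + w_latency * latency_ms
    return w_acc * accuracy - cost

# Two hypothetical candidates trading accuracy against resource cost:
light = nas_reward(accuracy=0.92, params_m=0.3, flops_g=0.5, latency_ms=8.0)
heavy = nas_reward(accuracy=0.95, params_m=25.0, flops_g=4.0, latency_ms=40.0)
```

Under such a reward, a slightly less accurate but far cheaper architecture (like the paper's ~0.3M-parameter models) can dominate a heavier, marginally more accurate one, which is how the controller is steered toward the efficient region of the Pareto front.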

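The abstract also notes that knowledge distillation accelerates convergence of lightweight candidates during the search. The standard distillation signal is a KL divergence between the teacher's and student's temperature-softened output distributions; the sketch below shows that textbook form, with the temperature value and logit examples as assumptions, not details from the paper.

```python
import math

# Textbook knowledge-distillation loss: KL divergence between the teacher's
# and the student's temperature-softened class distributions.
# Temperature and example logits are illustrative assumptions.

def softmax(logits, temperature=1.0):
    """Softened probability distribution over class logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that mimics the teacher's logits incurs a lower loss:
close = kd_loss([2.0, 1.0, 0.1], [2.1, 0.9, 0.2])
far   = kd_loss([0.1, 1.0, 2.0], [2.1, 0.9, 0.2])
```

Because this loss is dense and well-shaped even early in training, it doubles as the "reasonable evaluation signal" the abstract mentions: partially trained candidates can be ranked for the RL controller before full convergence.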

Similar Articles

1
Multiobjective Reinforcement Learning-Based Neural Architecture Search for Efficient Portrait Parsing.
IEEE Trans Cybern. 2023 Feb;53(2):1158-1169. doi: 10.1109/TCYB.2021.3104866. Epub 2023 Jan 13.
2
Neural Architecture Search for Portrait Parsing.
IEEE Trans Neural Netw Learn Syst. 2023 Mar;34(3):1112-1121. doi: 10.1109/TNNLS.2021.3104872. Epub 2023 Feb 28.
3
EMONAS-Net: Efficient multiobjective neural architecture search using surrogate-assisted evolutionary algorithm for 3D medical image segmentation.
Artif Intell Med. 2021 Sep;119:102154. doi: 10.1016/j.artmed.2021.102154. Epub 2021 Aug 24.
4
Efficient Resource-Aware Convolutional Neural Architecture Search for Edge Computing with Pareto-Bayesian Optimization.
Sensors (Basel). 2021 Jan 10;21(2):444. doi: 10.3390/s21020444.
5
Deeply Supervised Block-Wise Neural Architecture Search.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):2451-2464. doi: 10.1109/TNNLS.2023.3347542. Epub 2025 Feb 6.
6
BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture.
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):5004-5018. doi: 10.1109/TNNLS.2021.3067028. Epub 2022 Aug 31.
7
NAS-HRIS: Automatic Design and Architecture Search of Neural Network for Semantic Segmentation in Remote Sensing Images.
Sensors (Basel). 2020 Sep 16;20(18):5292. doi: 10.3390/s20185292.
8
ReCNAS: Resource-Constrained Neural Architecture Search Based on Differentiable Annealing and Dynamic Pruning.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2805-2819. doi: 10.1109/TNNLS.2022.3192169. Epub 2024 Feb 5.
9
A Gradient-Guided Evolutionary Neural Architecture Search.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4345-4357. doi: 10.1109/TNNLS.2024.3371432. Epub 2025 Feb 28.
10
Improving Differentiable Architecture Search via self-distillation.
Neural Netw. 2023 Oct;167:656-667. doi: 10.1016/j.neunet.2023.08.062. Epub 2023 Sep 9.